  • Pantagruel
    3.4k
    Did they? AI models typically use thousands or millions of datapoints for their algorithm. Did the programmers categorise all of them?Lionino

    Absolutely. A computer doesn't "decide" the meaning of a facial configuration. The only way a computer knows what is "happy" is if someone feeds it, say, one hundred pictures of "happy" faces and tags each one as "happy". Then you can feed it millions more pictures, if you like, and refine its capabilities. But if that is to work (backpropagation of error) then there has to be some standard against which to "correct" the input, i.e. someone still has to correctly identify when the computer makes a mistake in categorizing a happy face, and propagate that error back through the neural network architecture.
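
    To make that concrete, here is a minimal sketch of the idea (toy numbers and a single artificial neuron, nothing like a production system). The point to notice is that the labels are supplied by a person, and the disagreement with those labels is what gets pushed back into the weights:

        import numpy as np

        # Toy "dataset": each row is a crude feature vector for a face
        # (say, mouth curvature and eye openness), and each label is a
        # HUMAN judgement that the face is "happy" (1) or not (0).
        X = np.array([[0.9, 0.8], [0.8, 0.6], [0.1, 0.3], [0.2, 0.1]])
        y = np.array([1.0, 1.0, 0.0, 0.0])    # the tags someone had to supply

        rng = np.random.default_rng(0)
        w, b = rng.normal(size=2), 0.0         # one artificial neuron

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        for _ in range(1000):
            p = sigmoid(X @ w + b)             # the machine's guess
            error = p - y                      # disagreement with the human tags
            w -= 0.1 * X.T @ error / len(y)    # error pushed back into the weights
            b -= 0.1 * error.mean()

        print(sigmoid(X @ w + b).round(2))     # ends up close to the supplied tags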
  • Lionino
    2.7k
    The only way a computer knows what is "happy" is if someone feeds it, say, one hundred pictures of "happy" faces and tags each one as "happy". Then you can feed it millions more pictures, if you like, and refine its capabilities.Pantagruel

    I just don't think that happened, because programmers don't spend their time tagging millions of data-points; usually the data-points are externally sourced and the tags come with them. On this topic, the pictures would come from Google and the tag would be whatever emotion name was in the query. I will change my mind if you provide a source.
  • Pantagruel
    3.4k
    Here's an example.

    https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3281765#:~:text=First%2C%20AIs%20could%20display%20a,the%20doubt%E2%80%9D%20in%20uncertain%20situations.

    However, re. your argument, there is absolutely no question that people are categorizing things like this. If you are talking about purely material states, OK, sure, you can accurately and comprehensively identify a material configuration: circles, magnetic profiles of ships, you name it. But anything that involves a human evaluation for which there can be a range of interpretations is subject to this bias. It's absolutely inherent in the nature of neural networks.

    https://theconversation.com/emotion-reading-tech-fails-the-racial-bias-test-108404

    https://www.nytimes.com/2016/06/26/opinion/sunday/artificial-intelligences-white-guy-problem.html

    The dataset can never be more neutral than what is being evaluated. People are interested in categorizing what people do in order to get them to do certain things. It's probably the most subjectively-laden data-interpretation that there is (even when it comes to identifying what constitutes data). Ask ChatGPT if its answers could be subject to unknown selection biases its developers may have passed on to it accidentally through data-categorization.

    Edit: I just did it. Here's what it told me.

    So if that's the case, then couldn't your ability to identify something be subject to unknown selection biases your developers may have passed on to you accidentally through the initial data selection and identification?

    ChatGPT
    Yes, you've touched upon a crucial aspect of machine learning systems, including neural networks: the potential for biases to be inadvertently introduced through the training data and labeling process. These biases, which may be present in the data collected or the decisions made during the training phase, can influence the behavior and performance of the model, potentially leading to unintended consequences. Here are some key points to consider:

    Data Selection Bias: The selection of training data plays a significant role in shaping the behavior of a machine learning model. If the training dataset is not representative of the target population or contains inherent biases, the model may learn to replicate and amplify these biases in its predictions. For example, if a facial recognition system is trained predominantly on images of individuals from certain demographic groups, it may perform poorly on individuals from underrepresented groups.

    Labeling Bias: The process of labeling training data can also introduce biases if the criteria for assigning labels are subjective or influenced by human judgment. Annotators may inadvertently introduce their own biases or assumptions when labeling data, leading to inaccuracies or distortions in the training dataset. For instance, if annotators have preconceived notions about certain concepts or categories, they may label data in a way that reflects these biases.

    Algorithmic Bias: Even if the training data is unbiased, the algorithms used to train machine learning models may exhibit bias due to the optimization process or inherent limitations of the algorithm. For example, certain types of neural network architectures may be more prone to amplifying biases present in the data, or the choice of loss function used during training may inadvertently prioritize certain types of errors over others.

    Mitigation Strategies: To address these concerns, developers and researchers employ various strategies to identify, mitigate, and address biases in machine learning systems. These may include carefully curating diverse and representative training datasets, implementing fairness-aware algorithms that explicitly account for biases, and conducting rigorous evaluation and testing to assess the performance of the model across different demographic groups and use cases.

    Overall, while machine learning systems offer powerful tools for pattern recognition and decision-making, it's important to recognize and mitigate the potential for biases to influence their behavior. By taking proactive steps to address biases in the data, algorithms, and decision-making processes, developers can help ensure that machine learning systems are fair, transparent, and accountable.
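
    That last point about evaluation across demographic groups is, by the way, less mysterious than it sounds. In its simplest form it is just a per-group accuracy check, something like this (purely illustrative numbers, not from any real system):

        import numpy as np

        # Hypothetical evaluation records: model prediction, human label, group.
        preds  = np.array([1, 0, 1, 1, 0, 1, 0, 0])
        labels = np.array([1, 0, 0, 1, 1, 1, 0, 1])
        groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

        for g in np.unique(groups):
            mask = groups == g
            acc = (preds[mask] == labels[mask]).mean()
            print(f"group {g}: accuracy {acc:.2f}")   # uneven accuracy is the warning sign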
  • Count Timothy von Icarus
    2.7k


    I am also a techno-animist, so to me the technology goes beyond being only a tool. More like a gateway.

    A gateway to what? And is it always necessarily a gateway to something good? Doors are useful, but so are walls. Sometimes we'd prefer our gates to stay closed; it depends on what is outside.

    Another way of saying what this thread is all about, is to state that (in my opinion, experience and observation) atheist animism is the default worldview of humanity.

    I'm not really sure what that's supposed to mean. Both animism and religion are ubiquitous in early human cultures.

    What is a techno-animist? You think technology develops because the technology wants this to happen? An axe chops because it wants to chop? A car drives because it wants to drive, in the same way an animist will say a cloud is dark because "the sky is sad"?

    I could certainly see the merits of an information theory/Hegelian informed approach where the development of technology is part of larger historical processes that guide human civilization, but not one where individual instances of technology are in any way animate, or even possessing internal purposes. I think Aristotle is right to identify artifacts as unique in that their telos/purpose lies external to them. It is the man who makes the axe who decides that "an axe is for chopping."
  • Lionino
    2.7k

    It does not seem like any of the links prove that programmers' beliefs are affecting the AI. The first one simply states, correctly, that the AI works in a racially biased way, while the other two just seem to me like the usual screeching of Anglosphere leftists whenever some fact of science or technology does not agree with their politically formed confusions about the world, as has been the case constantly since 2015. What is apparent is the contrary: these articles are calling for the direct injection of human aesthetic preferences into the code.

    Ask ChatGPT if its answers could be subject to unknown selection biases its developers may have passed on to it accidentally through data-categorization.Pantagruel

    ChatGPT is programmed to give the most milquetoast, politically neutral, common-sense answers to any given question. Whatever it is that you ask it, if there is a slight chance of controversy, it will start with a disclaimer. ChatGPT is also not rational:
    https://chat.openai.com/share/96378835-0a94-43ce-a25b-f05e5646ec40
    https://chat.openai.com/share/b5241b53-e4d8-4cab-9a81-87fa73d740ad

    In any case, I still have not seen any proof that programmers are categorising their own data by hand.
  • Metaphysician Undercover
    13.1k
    Or would it be more appropriate to say that advancing technology is good in virtue of something else? It's obviously much more common to argue the latter here, and the most common argument is that "technological progress is good because it allows for greater productivity and higher levels of consumption."Count Timothy von Icarus

    I think this is a good point. It is not technology itself which can be judged as good or bad, but the way that it is used. Technology can be used in bad ways as well as good ways, and certain technologies could be developed specifically toward evil ends. The point being that the purpose of human existence is not to produce technology; it is something other than this, so technology is only good in relation to this other purpose, whether we know what that purpose is or not.
  • Pantagruel
    3.4k
    In any case, I still have not seen any proof that programmers are categorising their own data by hand.Lionino

    I'm sorry, perhaps you just do not understand the way neural networks function. Do you think that the data categorizes itself? This isn't a subject of debate; it is how they work. I've provided excellent, on-point information. Beyond that, I suggest studying the "training up phase" of neural network design:

    Usually, an artificial neural network’s initial training involves being fed large amounts of data. In its most basic form, this training provides input and tells the network what the desired output should be. For instance, if we wanted to build a network that identifies bird species, the initial training could be a series of pictures, including birds, animals that aren’t birds, planes, and flying objects.

    Each input would be accompanied by a matching identification such as the bird’s name or “not bird” or “not animal” information. The answers should allow the model to adjust its internal weightings to learn how to get the bird species right as accurately as possible.

    https://blog.invgate.com/what-is-neural-network

    I've been studying neural networks since the 1990s, long before they were popular, or even commonly known.

    There is no "AI", my friend. All there is is "pattern-matching" software that has been trained on pre-selected data, a selection made by biased human beings. The "Big-AI" players are all commercial enterprises. Do you not think that there are massive agendas (aka biases) skewing that data? Come on.

    the AI system trained with such historical data will simply inherit this bias

    This study shows that a bias (error) originally inherited by the AI from its source data in turn is inherited as a habit by students who learned a diagnostic skill from the AI.

    So technology is actually amplifying an error, and the confidence which people have in it only exacerbates the extent to which this is a problem. Which is what I originally suggested.
  • bert1
    2k
    We should never have invented agriculture. I think we're fucked. We have outpaced our environment, and our DNA, disastrously. Amish-style is a good workable compromise I think.
  • Pantagruel
    3.4k
    We should never have invented agriculture.bert1

    The Dawn of Everything evaluates this position, and also explores the unique power of the indigenous world view through some historical analysis informed by native sources and details.
  • bert1
    2k
    Yes, not heard of that one but I've come across the idea in a few places.
  • Lionino
    2.7k
    an artificial neural network’s initial training involves being fed large amounts of dataPantagruel

    I don't think anything in what I said would suggest that I don't know this. Your point is that the AI returns certain outputs because of researcher bias in categorising data-points. Large AI models receive billions and billions of data-points. The researchers do not categorise the data-points themselves. One can train an AI on emotions by feeding it Google pictures: images whose query was "angry" will be categorised as angry, images whose query was "happy" will be categorised as happy. Any output that the AI might show could only represent dataset bias, not researcher bias.
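
    To be concrete about what I mean by the tags coming with the data, the collection step looks roughly like this. This is a hypothetical sketch; image_search is just a stand-in for whatever search API or scraper is actually used:

        def image_search(query, limit):
            """Stand-in for a search-engine or scraper call; a real pipeline
            would return image URLs matching the query here."""
            return [f"https://example.com/{query.replace(' ', '_')}/{i}.jpg"
                    for i in range(limit)]

        queries = ["happy face", "angry face", "sad face"]
        dataset = []
        for q in queries:
            label = q.split()[0]                   # the tag is just the query word
            for url in image_search(q, limit=3):
                dataset.append({"url": url, "label": label})

        print(dataset[0])   # {'url': 'https://example.com/happy_face/0.jpg', 'label': 'happy'}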

    the AI system trained with such historical data will simply inherit this biasPantagruel

    The article you linked sets out to show that humans may inherit the biased information given to them by an AI (duh), not that AI inherits human bias. :meh:
    Moreover, Lucía Vicente and Helena Matute are psychologists, not people who would know about the workings of AI.

    Reveal
    In a series of three experiments, we empirically tested whether (a) people follow the biased recommendations offered by an AI system, even if this advice is noticeably erroneous (Experiment 1); (b) people who have performed a task assisted by the biased recommendations will reproduce the same type of errors than the system when they have to perform the same task without assistance, showing an inherited bias (Experiment 2); and (c) performing a task first without assistance will prevent people from following the biased recommendations of an AI and, thus, from committing the same errors, when they later perform the same task assisted by a biased AI system.
    If anything, the researchers are simply pointing out that people believe in AIs more than they should.
    Mrs Vicente is a student while Mrs Matute is a senior researcher. This seems to be junk research made with the intent of boosting Mrs Vicente's resume — nothing wrong with that, that is just how the academy works nowadays, but worth pointing out.


    I've been studying neural networks since the 1990s, long before they were popular, or even commonly known.

    There is no "AI", my friend
    Pantagruel

    Oh, please.
  • Pantagruel
    3.4k
    The article you linked sets out to show that humans may inherit the biased information given to them by an AI (duh), not that AI inherits human bias. :meh:Lionino

    Which bias originally derived from the biased input data, as stated in the article.

    Like I said, it's a fact. Do some reading. "Training up" a neural net. Categorization is supplied; it's not intrinsic to the nature of a picture, Copernicus.
  • Lionino
    2.7k
    Which bias originally derived from the biased input data, as stated in the article.Pantagruel

    No, the article has zero to do with the topic in hand.
    In the classification task, participants were instructed to observe a series of tissue samples, to decide, for each sample, whether it was affected or not by a fictitious disease called Lindsay Syndrome. Each tissue sample had cells of two colours, but one of them was presented in a greater proportion and volunteers were instructed to follow this criterion to identify the presence of the syndrome.

    Like I said, it's a fact. Do some reading.Pantagruel

    What is the authority of such a statement? Someone who has "studied neural networks since the 90s"
    Reveal
    (whatever that means, software engineering is worlds different since then)
    but has not displayed knowledge in programming? I mean, if you have been "studying" neural networks for 30 years, I would expect you to be at least advanced in three different programming languages. Is that really an unrealistic expectation to have?

    "Training up" a neural net.Pantagruel

    This would be the 4th time I reply to the same disingenuous point in this conversation.

    Categorization is supplied; it's not intrinsic to the nature of a picture, Copernicus.Pantagruel

    Yes, supplied by external sources, not by the researchers. There, the fourth time.
  • Pantagruel
    3.4k
    What is Pattern Recognition?
    Pattern recognition is a process of finding regularities and similarities in data using machine learning data. Now, these similarities can be found based on statistical analysis, historical data, or the already gained knowledge by the machine itself.

    A pattern is a regularity in the world or in abstract notions. If we discuss sports, a description of a type would be a pattern. If a person keeps watching videos related to cricket, YouTube wouldn’t recommend them chess tutorial videos.


    Examples: Speech recognition, speaker identification, multimedia document recognition (MDR), automatic medical diagnosis.

    Before searching for a pattern there are certain steps, and the first one is to collect the data from the real world. The collected data needs to be filtered and pre-processed so that the system can extract the features from the data. Then, based on the type of the data, the system will choose the appropriate algorithm among Classification, Regression, and Clustering to recognize the pattern.

    https://www.analyticsvidhya.com/blog/2020/12/an-overview-of-neural-approach-on-pattern-recognition/


    Filtering and pre-processing means identifying exactly how the training data fits the data-categories for which the neural network is to be trained.
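
    If it helps, here is roughly what "filter, pre-process, extract features, then classify" looks like, sketched with scikit-learn on made-up numbers (not any particular production pipeline):

        import numpy as np
        from sklearn.pipeline import Pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.linear_model import LogisticRegression

        # Made-up stand-in for collected data: five noisy measurements of one
        # underlying quantity, with the category labels supplied in advance.
        rng = np.random.default_rng(0)
        base = rng.normal(size=(100, 1))
        X = base + 0.3 * rng.normal(size=(100, 5))
        y = (base.ravel() > 0).astype(int)

        # Pre-processing (scaling), feature extraction (PCA), then a classifier.
        model = Pipeline([
            ("scale", StandardScaler()),
            ("features", PCA(n_components=2)),
            ("classify", LogisticRegression()),
        ])
        model.fit(X, y)
        print(model.score(X, y))   # accuracy measured against the supplied labels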

    I'll ask it one more time: How do you think the computer system gains the initial information that a certain picture represents a certain thing? It does not possess innate knowledge. It only knows what it has been told specifically. I know how it's done: it's done by training up the system using a training dataset in which the data is identified. The classic example is the mine-rock discriminator. Sonar profiles of "known mines" are fed into the system, along with sonar profiles of "known rocks". These are pre-categorized by the developers. After that, the neural network is fed novel data, which it then attempts to categorize. If it is wrong, the error is "back-propagated" across the network to correct the "weights" of the hidden-architecture neurons. And this back-propagation is ALSO a manual function, since the computer does not know that it is making an error.
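
    And since the mine-rock example keeps coming up, here is a bare-bones sketch of that training loop, with made-up "sonar profiles" and a tiny hidden layer (real systems are vastly bigger, but the mechanics are the same). The developer-supplied labels are the only standard the error is measured against, and that error is what adjusts the hidden-layer weights:

        import numpy as np

        rng = np.random.default_rng(1)

        # Made-up stand-ins for sonar profiles: 60-dimensional vectors, each one
        # pre-categorized by the developers as "mine" (1.0) or "rock" (0.0).
        X = rng.normal(size=(40, 60))
        y = (X[:, :10].sum(axis=1) > 0).astype(float)

        # One hidden layer of 8 neurons.
        W1 = rng.normal(scale=0.1, size=(60, 8)); b1 = np.zeros(8)
        W2 = rng.normal(scale=0.1, size=(8, 1));  b2 = np.zeros(1)

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        for _ in range(2000):
            h = sigmoid(X @ W1 + b1)                  # forward pass, hidden layer
            p = sigmoid(h @ W2 + b2).ravel()          # forward pass, output
            d_out = (p - y)[:, None] / len(y)         # error vs. the supplied labels
            dW2 = h.T @ d_out;  db2 = d_out.sum(axis=0)
            d_h = d_out @ W2.T * h * (1 - h)          # error propagated back to the hidden layer
            dW1 = X.T @ d_h;    db1 = d_h.sum(axis=0)
            for param, grad in ((W2, dW2), (b2, db2), (W1, dW1), (b1, db1)):
                param -= 1.0 * grad                   # weight corrections

        p = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel()
        print(((p > 0.5) == y).mean())                # share of training labels now reproduced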

    Training an Artificial Neural Network. In the training phase, the correct class for each record is known (this is termed supervised training), and the output nodes can therefore be assigned

    Yes, supplied by external sources, not by the researchers. There, the fourth time.Lionino

    The people developing the neural net (aka the developers) are the external sources. Who else do you think it would be, the neural-net police? The bureau of neural net standards? Jeez. Here's Wikipedia on Labeled_data:

    Labeled data is a group of samples that have been tagged with one or more labels. Labeling typically takes a set of unlabeled data and augments each piece of it with informative tags. For example, a data label might indicate whether a photo contains a horse or a cow, which words were uttered in an audio recording, what type of action is being performed in a video, what the topic of a news article is, what the overall sentiment of a tweet is, or whether a dot in an X-ray is a tumor.
    Labels can be obtained by asking humans to make judgments about a given piece of unlabeled data. Labeled data is significantly more expensive to obtain than the raw unlabeled data.


    Anyway, to the OP in general, I think I've conclusively and exhaustively demonstrated my point. And illustrated the very real dangers of a naive techno-optimism. If anything, we should be constantly tempering ourselves with a healthy and ongoing attitude of informed techno-skepticism.

    One final cautionary note. I worked as a technical expert in the health-care industry until this year, so I've seen a couple of these studies circulated on "baked-in" AI bias.

    For example, if historical patient visits are going to be used as the data source, an analysis to understand if there are any pre-existing biases will help to avoid baking them into the system,