
AI bias can arise from annotation guidelines – TechCrunch


Research in the field of machine learning and AI, now a key technology in practically every industry and company, is far too voluminous for anyone to read it all. This column, Perceptron (formerly Deep Science), aims to collect some of the most relevant recent discoveries and papers – especially in, but not limited to, artificial intelligence – and explain why they matter.

This week, a new study shows how bias, a common problem in AI systems, can start with the instructions given to the people recruited to annotate the data from which AI systems learn to make predictions. The co-authors find that annotators pick up on patterns in the instructions’ examples, which conditions them to contribute annotations that then become over-represented in the data, biasing the AI system toward those annotations.

Many AI systems today “learn” to make sense of images, videos, text, and audio from examples that have been labeled by annotators. The labels allow the systems to extrapolate the relationships between the examples (e.g., the link between the caption “kitchen sink” and a photo of a kitchen sink) to data the systems haven’t seen before (e.g., photos of kitchen sinks that weren’t included in the data used to “teach” the model).

This works remarkably well. But annotation is an imperfect approach – annotators bring biases to the table that can bleed into the trained system. For example, studies have shown that the average annotator is more likely to label phrases in African-American Vernacular English (AAVE), the informal grammar used by some Black Americans, as toxic, leading AI toxicity detectors trained on those labels to see AAVE as disproportionately toxic.
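To make that disparity concrete, here is a minimal sketch – not taken from any of the studies above – of how such a gap can be measured: score text from two dialect groups with a toxicity classifier and compare the flag rates. The toy classifier and the sample sentences below are stand-ins for a real model and a real evaluation set.

```python
# Hypothetical sketch: compare how often a toxicity classifier flags text
# from different dialect groups (toy classifier and data, not a real study).

def toy_toxicity_score(text: str) -> float:
    """Stand-in for a trained classifier; a real audit would call an actual model."""
    informal_markers = {"ain't", "finna", "y'all"}
    return 0.9 if any(marker in text.lower() for marker in informal_markers) else 0.1

samples = {
    "AAVE": ["I ain't going tonight", "We finna head out", "Y'all good?"],
    "SAE": ["I am not going tonight", "We are about to leave", "Are you all doing well?"],
}

THRESHOLD = 0.5
for dialect, texts in samples.items():
    flagged = sum(toy_toxicity_score(t) > THRESHOLD for t in texts)
    print(f"{dialect}: {flagged}/{len(texts)} sentences flagged as toxic")
# A large gap between the two flag rates is the kind of disparity the studies describe.
```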

As it turns out, annotators’ predispositions might not be solely to blame for the presence of bias in training labels. In a preprint study out of Arizona State University and the Allen Institute for AI, researchers investigated whether a source of bias might lie in the instructions written by dataset creators to serve as guides for annotators. Such instructions typically include a short description of the task (e.g., “Label all the birds in these photos”) along with several examples.


Image credits: Parmar et al.

The researchers looked at 14 different “benchmark” datasets used to measure the performance of natural language processing systems, i.e., AI systems that can classify, summarize, translate, and otherwise analyze or manipulate text. In studying the task instructions given to the annotators who worked on the datasets, they found evidence that the instructions influenced the annotators to follow specific patterns, which then propagated to the datasets. For example, over half of the annotations in Quoref, a dataset designed to test the ability of AI systems to understand when two or more expressions refer to the same person (or thing), start with the phrase “What is the name,” a phrase present in a third of the instructions for the dataset.
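As a rough illustration of how such pattern propagation can be quantified – this is a hypothetical sketch, not the paper’s methodology – one could count the fraction of a dataset’s annotations that begin with a phrase lifted from the annotation instructions. The sample questions and phrase list below are made up.

```python
# Hypothetical sketch: measure how often annotations echo instruction phrasing.
from collections import Counter

# Made-up annotations and instruction phrases, standing in for a real dataset
# like Quoref and the example phrases shown to its annotators.
questions = [
    "What is the name of the person who wrote the letter?",
    "What is the name of the dog that barked?",
    "Who did Maria call after the storm?",
]
instruction_phrases = ["What is the name", "Who is the person"]

def instruction_overlap(questions, phrases):
    """Return the fraction of questions that start with an instruction phrase."""
    hits = Counter()
    for q in questions:
        for p in phrases:
            if q.lower().startswith(p.lower()):
                hits[p] += 1
                break
    return sum(hits.values()) / len(questions), hits

fraction, per_phrase = instruction_overlap(questions, instruction_phrases)
print(f"{fraction:.0%} of annotations echo an instruction phrase")  # 67% here
print(dict(per_phrase))
```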

The phenomenon, which the researchers call “instruction bias,” is particularly troubling because it suggests that systems trained on biased instruction/annotation data might not perform as well as initially thought. Indeed, the co-authors found that instruction bias overestimates the performance of these systems, and that such systems often fail to generalize beyond the instruction patterns.

The silver lining is that large systems, like OpenAI’s GPT-3, were found to be generally less sensitive to instruction bias. But the research serves as a reminder that AI systems, like people, are susceptible to developing biases from sources that aren’t always obvious. The intractable challenge is discovering these sources and mitigating their downstream impact.

In a lighter paper, scientists from Switzerland concluded that facial recognition systems aren’t easily fooled by realistic AI-edited faces. “Morphing attacks,” as they’re called, involve using AI to modify the photo on an ID, passport, or other form of identity document with the goal of bypassing security systems. The co-authors created “morphs” using AI (Nvidia’s StyleGAN 2) and tested them against four state-of-the-art facial recognition systems. The morphs didn’t pose a significant threat, they claimed, despite their true-to-life appearance.
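For a sense of how such an evaluation works in outline – this is a hypothetical sketch, not the Swiss team’s code – a morph is typically counted as a successful attack only if it verifies against both of the identities blended into it. The embeddings, threshold, and averaging “morph” below are toy stand-ins for a real face-recognition model and a StyleGAN-generated image.

```python
# Hypothetical sketch of a morphing-attack check: a morph "matches" an enrolled
# identity if the cosine similarity of face embeddings clears the threshold.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def morph_accepted(morph_emb, subject_a_emb, subject_b_emb, threshold=0.6):
    """A morph succeeds only if it verifies against BOTH contributing subjects."""
    return (cosine_similarity(morph_emb, subject_a_emb) >= threshold and
            cosine_similarity(morph_emb, subject_b_emb) >= threshold)

# Toy embeddings standing in for the output of a real face-recognition model.
rng = np.random.default_rng(0)
subject_a, subject_b = rng.normal(size=128), rng.normal(size=128)
morph = 0.5 * (subject_a + subject_b)  # naive "average" morph in embedding space

print(morph_accepted(morph, subject_a, subject_b))
```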

Elsewhere in computer vision, researchers at Meta have developed an AI “assistant” that can remember the characteristics of a room, including the location and context of objects, in order to answer questions. Detailed in a preprint paper, the work is likely part of Meta’s Project Nazare initiative to develop augmented reality glasses that leverage AI to analyze their surroundings.


Image credits: Meta

The researchers’ system, which is designed to be used on any body-worn device equipped with a camera, analyzes footage to construct “semantically rich and efficient scene memories” that “encode spatio-temporal information about objects.” The system remembers where objects are and when they appeared in the footage, and moreover uses that memory as the basis for answering questions the user might ask about the objects. For example, when asked “Where did you last see my keys?”, the system can indicate that the keys were on the side table in the living room that morning.
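A minimal sketch of that sort of spatio-temporal object memory – the class and field names here are invented for illustration, and the real system works over raw video rather than hand-entered records – might look like this:

```python
# Hypothetical sketch of a spatio-temporal object memory (not Meta's system):
# each detection is stored with a label, a location, and a timestamp, and a
# "where did I last see X?" query returns the most recent matching entry.
from __future__ import annotations

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Observation:
    label: str        # e.g. "keys"
    location: str     # e.g. "side table, living room"
    seen_at: datetime

class SceneMemory:
    def __init__(self):
        self._observations: list[Observation] = []

    def record(self, label: str, location: str, seen_at: datetime) -> None:
        self._observations.append(Observation(label, location, seen_at))

    def last_seen(self, label: str) -> Observation | None:
        matches = [o for o in self._observations if o.label == label]
        return max(matches, key=lambda o: o.seen_at) if matches else None

memory = SceneMemory()
memory.record("keys", "kitchen counter", datetime(2022, 5, 14, 8, 0))
memory.record("keys", "side table, living room", datetime(2022, 5, 14, 9, 30))

hit = memory.last_seen("keys")
if hit:
    print(f"Last seen on the {hit.location} at {hit.seen_at:%H:%M}")
```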

Meta, which reportedly plans to release fully featured AR glasses in 2024, telegraphed its plans for “egocentric” AI last October with the launch of Ego4D, a long-term “egocentric perception” research project. The company said at the time that the goal was to teach AI systems to – among other tasks – understand social cues, how an AR device wearer’s actions might affect their surroundings, and how hands interact with objects.

From language and augmented reality to physical phenomena: an AI model proved useful in an MIT study of waves – how they break and when. While it might seem a bit arcane, wave modeling is in fact needed both for building structures in and near the water and for modeling how the ocean interacts with the atmosphere in climate models.

Image credits: MIT

Usually waves are roughly approximated by a set of equations, but the researchers trained a machine learning model on hundreds of wave instances in a 40-foot tank of water filled with sensors. By observing the waves and making predictions based on empirical evidence, then comparing that to the theoretical models, the AI helped show where the models fell short.
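As a sketch of the general approach – again hypothetical, not the MIT group’s code or data – one can fit a data-driven model to tank measurements and compare its error against a simpler theoretical formula to see where the theory deviates most:

```python
# Hypothetical sketch: fit a simple data-driven model to wave-tank measurements
# and compare its error against a stand-in "theoretical" approximation.
import numpy as np

rng = np.random.default_rng(1)

# Toy "sensor" data: wave steepness vs. measured breaking height (made up).
steepness = np.linspace(0.05, 0.40, 200)
measured_height = 1.2 * steepness + 0.8 * steepness**2 + rng.normal(0, 0.01, 200)

# A stand-in theoretical model that is only linear in steepness.
theory_height = 1.3 * steepness

# Data-driven model: a quadratic least-squares fit to the measurements.
coeffs = np.polyfit(steepness, measured_height, deg=2)
fitted_height = np.polyval(coeffs, steepness)

theory_rmse = np.sqrt(np.mean((theory_height - measured_height) ** 2))
fit_rmse = np.sqrt(np.mean((fitted_height - measured_height) ** 2))
print(f"theory RMSE: {theory_rmse:.4f}, learned-model RMSE: {fit_rmse:.4f}")

# Large residuals of the theory at high steepness point to where it breaks down.
worst = steepness[np.argmax(np.abs(theory_height - measured_height))]
print(f"theory deviates most around steepness ~ {worst:.2f}")
```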

A startup has emerged from research at EPFL, where Thibault Asselborn’s PhD thesis on handwriting analysis has been turned into a full-fledged educational app. Using algorithms he designed, the app (called School Rebound) can identify habits and corrective measures from just 30 seconds of a kid writing on an iPad with a stylus. These are presented to the kid in the form of games that help them write more clearly by reinforcing good habits.

“Our scientific model and rigor are important and what sets us apart from other existing applications,” Asselborn said. “We have received letters from teachers who have seen their students make rapid progress. Some students even come to the front of the class to practice.”

Image credits: Duke University

Another development in elementary schools involves identifying hearing problems during routine screenings. These screenings, as some readers may remember, often use a device called a tympanometer, which must be operated by trained audiologists. If one isn’t available, say in an isolated school district, kids with hearing problems may never get the help they need in time.

Samantha Robler and Susan Emmett at Duke decided to build a tympanometer that essentially operates itself, sending the data to a smartphone app where it’s interpreted by an AI model. Anything worrying will be flagged and the child can receive further screening. It isn’t a replacement for an expert, but it’s a lot better than nothing and may help identify hearing problems much earlier in places without the proper resources.


