Big Tech Ditched Trust and Safety. Now Startups Are Selling It Back As a Service


The same is true of the AI systems that companies use to help flag potentially dangerous or abusive content. Platforms often use huge troves of data to build internal tools that help them streamline that process, says Louis-Victor de Franssu, cofounder of trust and safety platform Tremau. But many of these companies have to rely on commercially available models to build their systems—which could introduce new problems.

“There are companies that say they sell AI, but in reality what they do is they bundle together different models,” says de Franssu. This means a company might be combining a bunch of different machine learning models—say, one that detects the age of a user and another that detects nudity to flag potential child sexual abuse material—into a service it offers clients.

And while this can make services cheaper, it also means that any issue in a model an outsourcer uses will be replicated across its clients, says Gabe Nicholas, a research fellow at the Center for Democracy and Technology. “From a free speech perspective, that means if there’s an error on one platform, you can’t bring your speech somewhere else; if there’s an error, that error will proliferate everywhere.” This problem can be compounded if several outsourcers are using the same foundational models.

By outsourcing critical functions to third parties, platforms could also make it harder for people to understand where moderation decisions are being made, or for civil society—the think tanks and nonprofits that closely watch major platforms—to know where to place accountability for failures.

“[Many watching] talk as if these big platforms are the ones making the decisions. That’s where so many people in academia, civil society, and the government point their criticism,” says Nicholas. “The idea that we may be pointing this to the wrong place is a scary thought.”

Historically, large firms like Telus, Teleperformance, and Accenture would be contracted to manage a key part of outsourced trust and safety work: content moderation. This often looked like call centers, with large numbers of low-paid staffers manually parsing through posts to decide whether they violate a platform’s policies against things like hate speech, spam, and nudity. New trust and safety startups are leaning more toward automation and artificial intelligence, often specializing in certain types of content or topic areas—like terrorism or child sexual abuse—or focusing on a particular medium, like text versus video. Others are building tools that allow a client to run various trust and safety processes through a single interface.
