Instagram will begin asking some US users for race and ethnicity data in order to study how different groups experience the platform, the company says in a blog post published today.
A random selection of Instagram users will get a pop-up in the app leading to a survey, hosted by the research group YouGov, that asks for their race and ethnicity. Answering the questions is optional, and Instagram says responses “will not limit the experiences that you have on Instagram, including impacting your reach or how people engage with your content in any way.”
In a video message posted today, Instagram head Adam Mosseri says collecting the data will help the platform look for ways to improve Instagram for users.
“If we’re going to make sure that Instagram is fair and equitable as an experience, we need to understand how it is working for different communities,” he says.
After user responses are collected, the data will be de-identified, split, and stored across a handful of research institutions, including Texas Southern University, University of Central Florida, Northeastern University, and Oasis Labs. In the blog post, Instagram says that individual responses will not be linked back to user accounts and that the company will only receive aggregated data from the partnering institutions.
“This information will allow us to better understand the experiences different communities have on Instagram, how our technology may impact different groups, and if there are changes we can make to promote fairness,” the blog reads. “For example, the analysis we conduct with this information might help us better understand experiences different communities may have when it comes to how we rank content.”
In 2020, Instagram created an equity team tasked with studying its algorithms for racial bias. Last fall, Meta, which owns Instagram and Facebook, said it was working on a way to measure “how people from marginalized communities experience Meta technologies.”
Civil rights groups and other advocates have long called on Facebook and other social media platforms to examine how their systems affect people of color, and numerous media reports have documented how platforms have enabled discrimination. A 2021 report from The Washington Post, for example, detailed how Meta withheld from civil rights groups internal findings that its hate speech removal systems disproportionately harmed Black users, and how executives balked at an aggressive plan aimed at removing the “worst of the worst” hate speech.