More than a year after its first civil rights audit, Meta says it’s still working on a number of changes recommended by auditors. The company released an update detailing its progress on addressing the auditors’ many recommendations.
According to the company, it has already implemented 65 of the 117 recommendations, with another 42 listed as “in progress or ongoing.” However, there are six areas where the company says it is still determining the “feasibility” of making changes, and two recommendations where it has “declined” to take further action. Notably, some of these deal with the most contentious issues called out in the original 2020 audit.
That original report, released in July 2020, found the company needed to do more to stop “pushing users toward extremist echo chambers.” It also said Meta needed to address issues related to algorithmic bias, and it criticized the company’s handling of Donald Trump’s posts. In its update, Meta says it still hasn’t committed to all the changes the auditors called for related to algorithmic bias. The company has implemented some of them, like engaging with outside experts and increasing the diversity of its AI team, but says others are still “under evaluation.”
Specifically, the auditors called for a mandatory, company-wide process “to avoid, identify, and address potential sources of bias and discriminatory outcomes when developing or deploying AI and machine learning models,” and for the company to “regularly test existing algorithms and machine-learning models.” Meta said the recommendation is “under evaluation.” Likewise, the audit recommended “mandatory training on understanding and mitigating sources of bias and discrimination in AI for all teams building algorithms and machine-learning models.” That suggestion is also listed as “under evaluation,” according to Meta.
The company says some updates related to content moderation are likewise “under evaluation.” These include a recommendation to improve the “transparency and consistency” of decisions on moderation appeals, and a recommendation that the company study more aspects of how hate speech spreads and how it can use that data to address targeted hate more quickly. The auditors also recommended that Meta “disclose additional data” about which users are being targeted with voter suppression on its platform; that recommendation, too, is “under evaluation.”
The only two recommendations Meta declined outright were related to its elections and census policies. “The Auditors recommended that all user-generated reports of voter interference be routed to content reviewers to make a determination on whether the content violates our policies, and that an appeals option be added for reported voter interference content,” Meta wrote. The company said it opted not to make those changes because doing so would slow down the review process, and because “the vast majority of content reported as voter interference does not violate the company’s policies.”
Separately, Meta also said it’s developing “a framework for studying our platforms and identifying opportunities to increase fairness when it comes to race in the United States.” To accomplish this, the company will conduct “off-platform surveys” and analyze its own data using surnames and zip codes.