AI Researchers at Stanford Created a Tool to Put Algorithmic Auditing in the Hands of Impacted Communities

Even with the most recent technological advances, AI has yet to produce conclusions that are completely impartial or ethically sound. When confronted with biased search results, social media feeds, or automatically generated hiring and credit decisions, an ordinary person has few options. The most they can do is express their outrage by boycotting the platform or reporting the incident in the hope that those in charge of the algorithm will make the necessary corrections; such efforts are frequently in vain. Journalists and researchers, on the other hand, have ample technical resources at their disposal: they can analyze an algorithmic system to pinpoint the inputs that lead to biased results. Such algorithmic audits can help impacted communities hold those who deploy harmful algorithms accountable.

To enable a more extensive review of algorithms' impacts, researchers from Stanford's Institute for Human-Centered Artificial Intelligence (HAI) collaborated with the University of Pennsylvania on a study that puts the tools of algorithmic auditing into the hands of everyday users, particularly those from impacted communities. As a proof of concept, the team developed IndieLabel, a web-based tool that enables end users to audit Perspective API. The main purpose was to determine whether non-experts could surface systematic statements about what a system was doing incorrectly and identify bias issues that had not previously been reported.

The research team tested their approach by concentrating on Perspective API in a content-moderation setting. Perspective API is a popular content-moderation model that scores the degree of toxicity in a piece of text. Several reputable publishers, including The New York Times and El País, use Perspective API to mark specific content for manual inspection, label it as harmful, or reject it automatically. Furthermore, because Perspective API has already undergone audits by technical experts, it offers a basis for comparing how end-user auditors approach the auditing process differently from specialists.
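To make that setup concrete, below is a minimal sketch of how a client might query Perspective API for a toxicity score. The endpoint and request shape follow Google's public documentation; the API key and the 0.7 flagging threshold are placeholder assumptions, not values from the study.

```python
# Minimal sketch of querying Perspective API for a toxicity score.
# The key below is a placeholder; a real key must be obtained from Google.
import requests

API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder

def toxicity_score(text: str) -> float:
    """Return Perspective's TOXICITY summary score (0.0 to 1.0) for `text`."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(API_URL, params={"key": API_KEY}, json=payload)
    response.raise_for_status()
    return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# A moderation pipeline might then flag anything above a chosen threshold:
if __name__ == "__main__":
    score = toxicity_score("You are a wonderful person.")
    print(f"toxicity: {score:.2f}", "-> flag for review" if score > 0.7 else "-> allow")
```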


IndieLabel simulates the end-user auditor's perceptions of content toxicity across an entire dataset and lets the user drill down to see where Perspective API diverges from their own judgment. This is where IndieLabel departs from convention: typically the model serves as the point of comparison and user opinions are measured against it, whereas here the user's opinion is the benchmark against which the model is compared. The end-user auditor first assigns about 20 content examples a 5-point label ranging from "not at all harmful" to "very toxic." While 20 may seem a modest number, the team demonstrated that it is enough to train a model that anticipates how the auditor would label a considerably larger dataset. After this training phase, the auditor can proceed to the audit or rate additional samples to improve their model.
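The paper describes IndieLabel's actual modeling approach; purely as an illustration of the idea, the hypothetical sketch below fits a simple TF-IDF-plus-ridge regressor on 20 auditor-labeled examples and uses it to predict the auditor's 5-point ratings for a much larger corpus. The texts, labels, and model choice here are all placeholder assumptions, not IndieLabel's.

```python
# Illustrative sketch (not IndieLabel's actual model): learn to predict an
# auditor's 5-point toxicity labels (0 = "not at all harmful" ... 4 = "very
# toxic") from ~20 seed examples, then extrapolate to an unlabeled corpus.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

# ~20 seed examples labeled by the end-user auditor (toy placeholder data).
seed_texts = [f"seed comment {i}" for i in range(20)]
seed_labels = np.random.randint(0, 5, size=20).astype(float)

vectorizer = TfidfVectorizer(min_df=1)
X_seed = vectorizer.fit_transform(seed_texts)

auditor_model = Ridge(alpha=1.0)  # a small linear model suits so few labels
auditor_model.fit(X_seed, seed_labels)

# Predict the auditor's likely labels for the full (placeholder) dataset.
corpus = [f"corpus comment {i}" for i in range(1000)]
predicted_labels = auditor_model.predict(vectorizer.transform(corpus))
predicted_labels = predicted_labels.clip(0, 4)  # stay on the 5-point scale
```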

During the audit, users can select a topic area from a dropdown menu or define their own topics to audit. IndieLabel then generates a histogram showing where Perspective API's toxicity prediction for that topic differs from the user's perspective. To understand the system's behavior better, the auditor can review individual examples, flag them for the developer, and add notes explaining why they are or are not harmful from the user's point of view.
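As a rough illustration of that disagreement view (again, not IndieLabel's actual implementation), the sketch below compares placeholder Perspective-style scores for a topic against predicted auditor labels like those from the previous snippet, and histograms the gap:

```python
# Sketch of a disagreement histogram: system toxicity scores vs. the
# auditor-model's predictions on one topic. Both arrays are placeholders
# standing in for real scores restricted to the chosen topic.
import numpy as np
import matplotlib.pyplot as plt

perspective_scores = np.random.rand(200)      # placeholder scores in [0, 1]
predicted_labels = np.random.rand(200) * 4    # placeholder labels in [0, 4]

# Put both on the same 0-1 scale, then compute (system - auditor) per example.
gap = perspective_scores - predicted_labels / 4.0

plt.hist(gap, bins=30)
plt.axvline(0.0, linestyle="--")  # zero = system and auditor agree
plt.xlabel("system score minus predicted auditor score")
plt.ylabel("number of examples")
plt.title("Where Perspective API diverges from this auditor")
plt.show()
```

Bars far to the right of zero would suggest over-flagging relative to the auditor's judgment; bars far to the left would suggest under-flagging.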

To evaluate IndieLabel, the study team enlisted 17 non-technical auditors. Participants both reproduced problems that formal audits had previously identified and raised problems that had gone unreported, such as the under-flagging of covert acts of hate that reinforce stigma and the over-flagging of slurs that marginalized groups have reclaimed. There were also cases where participants disagreed on the same audit topic, such as how strictly to limit the use of derogatory terms for people with intellectual disabilities.

The team also emphasizes how important it is for algorithm developers to incorporate end-user audits early in the development process, before deployment. Developers will need to pay much closer attention to the communities their systems are built for and deliberate early on how their systems should behave in contentious problem areas. Doing so establishes a direct feedback loop in which complaints go straight to the developer, who can adjust the system before any damage is done. The IndieLabel approach could, for instance, be adapted to examine a social media company's feed-ranking algorithm or a large corporation's candidate-screening model.

The team also aims to host end-user audits on external third-party platforms in the near future, which would first require acquiring a dataset but would make end-user audits accessible to all communities. Although the procedure would be time-consuming, it may be necessary when an algorithm's developer declines to address a particular problem. Even though developers would then be responding to public pressure rather than direct feedback, this would still be far better than the prior approach of filing an anecdotal complaint that gets lost in the ether.


Check out the Paper and Reference. All credit for this research goes to the researchers on this project.


Khushboo Gupta is a consulting intern at MarktechPost. She is currently pursuing her B.Tech at the Indian Institute of Technology (IIT), Goa. She is passionate about Machine Learning, Natural Language Processing, and Web Development, and enjoys learning more about the technical field by participating in challenges.

