AI Can Combat Misinformation and Bias in News

Misinformation and bias in news have become one of the greatest challenges society faces today. Information operations, deepfakes, financial influence campaigns, and the exploitation of social tensions are tools now used by both individuals and state actors across the globe.

Misinformation is a general term for false information, and it plays a major role in the manipulation of public opinion. It is also used to attack political leaders and is deepening divisions between groups in many countries. More than ever, misinformation is reaching the voting booth, and it has become remarkably effective.

Disinformation, which is slightly different from misinformation, is deliberately misleading or biased information. It can include manipulated narratives and facts. In other words, it is propaganda. 

A 2019 report by researchers at Oxford University detailed organized disinformation campaigns in 70 different countries, including Russia, China, Vietnam, and Guatemala. 

The digital economy has led to an increase in misinformation and bias in news, with AI technologies and social media platforms becoming ways to push it out to the masses. 

We have seen many examples of this. The previous two U.S. presidential elections were rife with misinformation and bias, and that trend is sure to continue. Financial markets have been subject to short-term manipulation through false information, and foreign affairs are growing more tense as a result of its use. Social tensions are inflamed by false claims about supporters and opponents of specific causes, and fake news is rampant in nearly every nation.

Our reliance on social media and algorithms has made us incredibly vulnerable to misinformation and bias, and it has become easy to manipulate large groups of people. 

It's important to recognize that most of our news now comes from social media. According to an essay titled “Biases Make People Vulnerable to Misinformation Spread by Social Media,” published by The Conversation, a network of not-for-profit media outlets, three types of bias make the social media ecosystem vulnerable to both intentional and accidental misinformation.

The three types of bias are: 

  1. Bias in the Brain: “Cognitive biases originate in the way the brain processes the information that every person encounters every day.”
  2. Bias in Society: “When people connect directly with their peers, the social biases that guide their selection of friends come to influence the information they see.”
  3. Bias in the Machine: “The third group of biases arises directly from the algorithms used to determine what people see online. Both social media platforms and search engines employ them.” 

This combination of biases has created the perfect environment for bad actors to use our current news platforms as a way to spread misinformation. But it’s important to recognize that bias is also present outside this realm. Regular news reports and articles risk being biased due to an author’s or publication’s own particular leanings. This is nothing new. 

AI Solutions for Misinformation and Bias

The rise of artificial intelligence has provided us with many new and innovative tools that can be used to combat misinformation and bias in the news.

For example, when designed properly, AI can effectively detect and remove certain misinformation and other problematic content online. Companies like Google, Twitter, and Facebook already use it to identify bot accounts, which are often used to spread misinformation and biased news. 
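In practice, bot detection often starts with simple behavioral signals before any machine learning is applied. The sketch below is a toy heuristic scorer; the features, thresholds, and weights are illustrative assumptions, not the detection logic of any real platform.

```python
# Toy heuristic bot scorer. Features, thresholds, and weights are
# illustrative assumptions, not any platform's actual detection system.
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float
    followers: int
    following: int
    account_age_days: int

def bot_score(acct: Account) -> float:
    """Return a 0-1 score; higher suggests automated behavior."""
    score = 0.0
    if acct.posts_per_day > 50:  # inhuman posting volume
        score += 0.4
    if acct.following > 0 and acct.followers / acct.following < 0.01:
        score += 0.3             # follows many accounts, followed by few
    if acct.account_age_days < 30:  # freshly created account
        score += 0.3
    return min(score, 1.0)

likely_bot = Account(posts_per_day=120, followers=3,
                     following=2000, account_age_days=5)
human = Account(posts_per_day=2, followers=400,
                following=350, account_age_days=900)
print(bot_score(likely_bot))  # 1.0
print(bot_score(human))       # 0.0
```

Production systems replace hand-tuned rules like these with trained classifiers over hundreds of features, but the underlying idea is the same: automated accounts leave statistical fingerprints.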

New AI frameworks must be explored to combat this problem, and existing ones must be improved, as many have proven to be problematic. These problems become even more dangerous when conflicts arise. Military conflicts, such as the current Russia-Ukraine conflict, lead to the dissemination of misinformation at an incredibly fast rate. 

There are various AI-powered tools in the pipeline. For example, NT CONNECT, an international technology developer, recently announced the launch of its news aggregator application, NOOZ.AI. The AI-powered language analysis engine aims to bring transparency to the polarizing bias found throughout today’s news media. It attempts to achieve this by making readers aware of news media’s influence before they read an article. 

“Authors or journalists tend to lean toward a particular bias – often without the reader’s knowledge,” the company’s press release says. “By knowing the author and news sources’ historical bias, consumers can examine the article with a more objective mind and resist being manipulated to think in a certain way about any particular subject.” 

The NOOZ.AI tool is built upon four key pillars: 

  • Opinion Analysis: It helps readers determine the presence of personal feelings, views, beliefs, or judgments in a journalist’s writing.
  • Sentiment Analysis: The tool uses sentiment analysis to help readers gauge a writer’s support or opposition to the general news content or specific topic.
  • Propaganda Analysis: It can detect potential disinformation by identifying the usage of up to 18 possible persuasion techniques.
  • Revision Analysis: Readers can investigate the evolution of a news story and its manipulation of opinion and sentiment over time. 

The app provides scores for each article, with opinion scores ranging from 0 to 100 and sentiment scores ranging from -100 to 100. It also tracks all text edits for each revision and gives readers the ability to compare different versions. 

According to Garry Paxinos, CTO at NT CONNECT: 

“Biased reporting is a major concern in today’s society. Misinformation is the most commonly (mis)used buzzword.  But the reality is that language itself can be used to manipulate and persuade the casual reader.  By objectively analyzing language we avoid the subjective bias found in other approaches. As opposed to subjectively labeling misinformation, our approach is to illuminate how one can be manipulated with exaggerated headlines, revision history, and text of articles.”
