As Industries Prep for Accelerated AI Adoption, New Focus on Ethics & Responsibility

By: Pandurang Kamat, CTO of Persistent Systems

Artificial intelligence (AI) at its core is machine learning (ML), the process by which machines learn to improve their performance from data. The sheer amount of data available today has given AI fertile ground to flourish, and it has accelerated at a breakneck pace in just the past few years. AI’s promise has technologists and executives across a wide range of industries thinking creatively about its applications.

From the industrial revolution onwards, every period of rapid technological progress has been marked by equal parts enthusiasm and trepidation. The World Wide Web Consortium (W3C), the body that sets standards for the web, points out that “there is increasing awareness that the application of machine learning poses risks and can lead to harms” – including bias, privacy, security, sustainability, and overall accountability.

“There is clear demand for an ethical approach to ML, seen through activism from civil society, the emergence of ~100 sets of ethical AI principles globally, and government moves all around the world to regulate AI,” the group wrote. “If technologies are not aligned with the values of the societies they operate in, they risk undermining them.”

Choose Wisely

On the surface, it’s tough to argue that enhanced speed and output are bad for business. But in certain industries, a great deal is at stake in decisions that, many argue, shouldn’t be left to an algorithm. Looking at a few of the largest industries, we can see just how vulnerable we are to machine error, and why responsible AI is paramount.

Banking, Financial Services, and Insurance (BFSI)

The World Economic Forum (WEF) and Deloitte examined the process of deploying AI in BFSI companies, finding that, “while AI can help create useful business insights from an overwhelming amount of information,” the “foreignness of AI” will be an ongoing source of serious risk.

“AI reasons in unhuman ways,” the report reads. “AI systems do not follow human constructs of logic, and behave very differently from human actors given the same task. Further, the self-learning nature of AI systems allows them to evolve without human input, which can lead to unexpected outcomes based on unknown variables.”

Advocates for responsible AI contend that machine learning presents serious problems in areas like automatic risk scoring, credit, and lending decisions, all of which have historically been prejudicial to certain demographics. Recent studies have shown that lenders using AI-based decision engines were more likely to deny home loans to people of color – in some cases, Black applicants were as much as 80% more likely to be rejected than comparable white applicants.
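To make that concern concrete, fairness reviews often begin with a simple disparate-impact check: compare approval rates across demographic groups and flag large gaps. The sketch below is illustrative only – the group labels, decision data, and the 0.8 threshold (the common “four-fifths rule” heuristic) are assumptions, not figures from the studies cited above.

```python
# Illustrative disparate-impact check on loan-approval outcomes.
# All data is hypothetical; the 0.8 cutoff is the informal "four-fifths
# rule" screening heuristic, not a legal or regulatory test.
from collections import defaultdict

# (applicant_group, approved) pairs -- hypothetical model decisions
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += int(approved)

rates = {g: approvals[g] / totals[g] for g in totals}
reference = max(rates.values())  # approval rate of the best-treated group

for group, rate in rates.items():
    ratio = rate / reference
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: approval rate {rate:.2f}, ratio vs. reference {ratio:.2f} -> {flag}")
```

A check like this doesn’t prove or disprove bias on its own, but it is the kind of routine measurement responsible AI advocates want built into lending systems rather than bolted on after complaints arrive.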

Healthcare

A few years ago, the Journal of the American Medical Association (JAMA) took a look at the adoption of AI in healthcare, finding that the challenges are many, including “unrealistic expectations, biased and nonrepresentative data, inadequate prioritization of equity and inclusion, the risk of exacerbating health care disparities, low levels of trust, and uncertain regulatory environments.”

Not exactly a ringing endorsement, but the bar for responsible AI is higher in healthcare, especially given its literal life-or-death stakes. On the whole, healthcare professionals are optimistic but guarded about the future of AI, focusing their efforts on educating other practitioners about its use in clinical settings and creating guidelines for moving forward responsibly in a way that benefits all patients.

Media

PwC estimates that AI will contribute $15.7 trillion to the global economy by 2030, yet according to Salesforce, fewer than half (48%) of customers trust companies to use AI ethically, and 65% are concerned about its unethical use. At the nexus of profit and trust sit media companies – early adopters with enormous reach and influence. And because AI is not a “set it and forget it” application, media companies’ responsibility covers not just the production of content but also its ongoing monitoring after deployment.

How are ads being served and personalized? How is content reaching a particular audience, and what is the message? Who is teaching the AI “how” to learn? And how can you target specific consumers while also respecting privacy? These are just some of the questions vexing media companies as they balance ethics with gaining a competitive edge.
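One concrete form that post-deployment monitoring can take is drift detection: periodically comparing a live model’s output distribution (say, which audience segments an ad model targets) against a baseline captured at launch, and escalating to human review when the gap grows too large. The following is a minimal sketch under stated assumptions – the segment names, counts, and 0.1 alert threshold are hypothetical, not data from any source cited in this article.

```python
# Minimal drift check: compare a live model's output distribution against
# a launch-time baseline. Segment names, counts, and the 0.1 threshold
# are hypothetical and purely illustrative.

def distribution(counts: dict[str, int]) -> dict[str, float]:
    """Normalize raw counts into a probability distribution."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def total_variation(p: dict[str, float], q: dict[str, float]) -> float:
    """Total variation distance between two categorical distributions."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

baseline = distribution({"segment_a": 500, "segment_b": 300, "segment_c": 200})
this_week = distribution({"segment_a": 650, "segment_b": 200, "segment_c": 150})

drift = total_variation(baseline, this_week)
print(f"total variation distance: {drift:.3f}")
if drift > 0.10:  # hypothetical alert threshold
    print("Drift exceeds threshold -- trigger human review of targeting behavior.")
```

The specific metric matters less than the habit: a model that keeps learning, or a population that keeps changing, will quietly move away from whatever was approved at launch unless someone is measuring.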

The Response to Responsibility

Progress toward responsible AI is evident every day: lawmakers are drafting regulations, consumers are demanding more transparency, and many companies have created dedicated teams tasked with ensuring their AI is used appropriately. Companies need a well-defined responsible AI framework covering ethics, transparency, accountability, fairness, privacy, and security. As debates and reforms continue, a recent joint study by MIT Sloan Management Review and Boston Consulting Group offers simple overall guidance: “If you want to be a responsible AI leader, focus on being a responsible company.”
