Arjun Narayan is the Head of Global Trust and Safety for SmartNews, a news aggregator app. He is also an AI ethics and tech policy expert. SmartNews uses AI and a human editorial team as it aggregates news for readers.
You were instrumental in helping to establish Google’s Trust & Safety Asia Pacific hub in Singapore. What were some key lessons you learned from this experience?
When building Trust and Safety teams, country-level expertise is critical because abuse looks very different depending on the country you’re operating in. For example, the way Google products were abused in Japan was different from how they were abused in Southeast Asia and India. Abuse vectors vary depending on who is abusing and which country you’re based in, so there’s no homogeneity. This was something we learned early.
I also learned that cultural diversity is incredibly important when building Trust and Safety teams abroad. At Google, we ensured there was enough cultural diversity and understanding among the people we hired. We were looking for people with specific domain expertise, but also for language and market expertise.
I also found cultural immersion to be incredibly important. When building Trust and Safety teams across borders, we needed to ensure our engineering and business teams could immerse themselves in the markets we served. This helped ensure everyone was closer to the issues we were trying to manage. To do this, we ran quarterly immersion sessions with key personnel, and that helped raise everyone’s cultural IQ.
Finally, cross-cultural comprehension was so important. I managed a team in Japan, Australia, India, and Southeast Asia, and the way in which they interacted was wildly different. As a leader, you want to ensure everyone can find their voice. Ultimately, this is all designed to build a high-performance team that can execute sensitive tasks like Trust and Safety.
Previously, you were also on the Trust & Safety team at ByteDance for the TikTok application. How are videos that are often shorter than one minute monitored effectively for safety?
I want to reframe this question a bit, because it doesn’t really matter whether a video is short or long form. Length isn’t a factor when we evaluate video safety, and it has little bearing on whether a video can spread abuse.
When I think of abuse, I think of it in terms of “issues.” What are some of the issues users are vulnerable to? Misinformation? Disinformation? Whether that video is one minute or one hour long, there is still misinformation being shared, and the level of abuse remains comparable.
Depending on the issue type, you start to think through policy enforcement and safety guardrails and how you can protect vulnerable users. As an example, let’s say there’s a video of someone committing self-harm. When we receive notification that this video exists, we must act with urgency, because someone could lose their life. We depend a lot on machine learning to do this type of detection. The first move is always to contact the authorities to try and save that life; nothing is more important. From there, we aim to suspend the video, the livestream, or whatever format in which it is being shared. We need to ensure we’re minimizing exposure to that kind of harmful content ASAP.
Likewise, if it’s hate speech, there are different ways to unpack that, and in the case of bullying and harassment it again depends on the issue type; based on that, we’d tweak our enforcement options and safety guardrails. Another example of a good safety guardrail was machine learning we implemented that could detect when someone writes something inappropriate in the comments and prompt them to think twice before posting. We wouldn’t necessarily stop them, but our hope was that people would reconsider before sharing something mean.
It comes down to a combination of machine learning and keyword rules. When it comes to livestreams, we also had human moderators reviewing streams flagged by AI so they could report issues immediately and implement protocols. Because livestreams happen in real time, it’s not enough to rely on users to report; we need humans monitoring as well.
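As a rough illustration of the kind of pipeline described here, the sketch below combines a keyword rule list with a machine-learning risk score and routes flagged livestreams to a human review queue. The classifier, keyword list, and threshold are placeholders invented for the example, not TikTok’s actual system.

```python
# Illustrative only: a toy moderation pipeline combining keyword rules with a
# machine-learning risk score, routing flagged livestreams to human review.
from dataclasses import dataclass

BLOCKED_KEYWORDS = {"example_threat_phrase", "example_slur"}  # placeholder rule list

@dataclass
class ContentItem:
    item_id: str
    text: str
    is_livestream: bool

def ml_risk_score(text: str) -> float:
    """Stub for a trained classifier returning a 0..1 abuse-risk score."""
    return 0.9 if "harm" in text.lower() else 0.1

def keyword_hit(text: str) -> bool:
    lowered = text.lower()
    return any(keyword in lowered for keyword in BLOCKED_KEYWORDS)

def moderate(item: ContentItem, human_review_queue: list) -> str:
    score = ml_risk_score(item.text)
    if keyword_hit(item.text) or score >= 0.8:
        # Livestreams are time-critical, so flagged streams always go to a human.
        if item.is_livestream:
            human_review_queue.append(item)
            return "escalate_to_human"
        return "suspend_pending_review"
    return "allow"

queue: list[ContentItem] = []
print(moderate(ContentItem("123", "video discussing self-harm", True), queue))
```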
Since 2021, you’ve been the Head of Trust, Safety, and Customer Experience at SmartNews, a news aggregator app. Could you discuss how SmartNews leverages machine learning and natural language processing to identify and prioritize high-quality news content?
The central concept is that we have certain “rules” or machine learning technology that can parse an article or advertisement and understand what that article is about.
Whenever there is something that violates our “rules”, let’s say something is factually incorrect or misleading, we have machine learning flag that content to a human reviewer on our editorial team. At that stage, a reviewer who understands our editorial values can quickly assess the article and make a judgement about its appropriateness or quality. From there, actions are taken to address it.
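A minimal sketch of that flag-then-review flow might look like the following; the rule names, decision labels, and flagging logic are hypothetical stand-ins, not SmartNews’s actual pipeline.

```python
# Illustrative sketch: an article a model flags as potentially violating
# editorial "rules" is queued for a human reviewer, whose judgement decides
# the final action.
from enum import Enum

class Decision(Enum):
    APPROVE = "approve"
    DEMOTE = "demote"
    REMOVE = "remove"

def model_flags(article_text: str) -> list[str]:
    """Stub for an NLP model returning the rules an article may violate."""
    return ["possibly_misleading"] if "miracle cure" in article_text.lower() else []

def triage(article_id: str, article_text: str, review_queue: list) -> None:
    flags = model_flags(article_text)
    if flags:
        # Machine learning only flags; a human editor makes the final call.
        review_queue.append({"article_id": article_id, "flags": flags})

def human_review(ticket: dict) -> Decision:
    # Placeholder for an editor applying editorial values to the flagged article.
    return Decision.DEMOTE

queue: list[dict] = []
triage("a-42", "This miracle cure ends all illness overnight", queue)
print([human_review(ticket) for ticket in queue])
```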
How does SmartNews use AI to ensure the platform is safe, inclusive, and objective?
SmartNews was founded on the premise that hyper-personalization is good for the ego but also polarizes us all by reinforcing biases and putting people in filter bubbles.
The way in which SmartNews uses AI is a little different because we’re not exclusively optimizing for engagement. Our algorithm wants to understand you, but it’s not necessarily hyper-personalizing to your taste. That’s because we believe in broadening perspectives. Our AI engine will introduce you to concepts and articles beyond those adjacent to your existing interests.
The idea is that there are things people need to know in the public interest, and there are things people need to know to broaden their scope. The balance we try to strike is to provide these contextual analyses without being big-brotherly. Sometimes people won’t like the things our algorithm puts in their feed. When that happens, people can choose to not read that article. However, we are proud of the AI engine’s ability to promote serendipity, curiosity, whatever you want to call it.
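One simple way to picture this balance is a feed that ranks mostly by predicted interest but reserves a few slots for articles outside the topics a reader already follows. The candidate format, scores, and explore ratio below are invented for illustration and are not SmartNews’s actual engine.

```python
# A minimal sketch of blending personalization with serendipity: most slots go
# to high-interest items, a few go to out-of-bubble articles.
import random

def build_feed(candidates, user_topics, feed_size=4, explore_ratio=0.3):
    """candidates: list of (article_id, topic, predicted_interest)."""
    familiar = [c for c in candidates if c[1] in user_topics]
    unfamiliar = [c for c in candidates if c[1] not in user_topics]

    n_explore = int(feed_size * explore_ratio)
    # Top familiar items by predicted interest fill most of the feed...
    feed = sorted(familiar, key=lambda c: c[2], reverse=True)[: feed_size - n_explore]
    # ...and a few out-of-bubble articles are sampled in to promote serendipity.
    feed += random.sample(unfamiliar, min(n_explore, len(unfamiliar)))
    random.shuffle(feed)
    return feed

candidates = [("a1", "politics", 0.9), ("a2", "sports", 0.4),
              ("a3", "science", 0.7), ("a4", "local", 0.6)]
print(build_feed(candidates, user_topics={"politics", "sports"}))
```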
On the safety side of things, SmartNews has something called a “Publisher Score,” an algorithm designed to constantly evaluate whether a publisher is safe. Ultimately, we want to establish whether a publisher has an authoritative voice. As an example, we can all collectively agree ESPN is an authority on sports. But if a random blog is copying ESPN content, we need to ensure ESPN ranks higher than that blog. The Publisher Score also considers factors like originality, when articles were posted, what user reviews look like, and so on. It’s ultimately a spectrum of many factors we consider.
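To make the “spectrum of factors” idea concrete, here is a hypothetical weighted-score sketch. The signal names and weights are invented for the example and do not reflect SmartNews’s real formula.

```python
# Hypothetical "Publisher Score": a weighted blend of normalized signals such
# as authority, originality, freshness, and user feedback.
WEIGHTS = {"authority": 0.4, "originality": 0.3, "freshness": 0.1, "user_feedback": 0.2}

def publisher_score(signals: dict) -> float:
    """Each signal is normalized to 0..1; the score is their weighted sum."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

espn = {"authority": 0.95, "originality": 0.9, "freshness": 0.8, "user_feedback": 0.85}
copycat_blog = {"authority": 0.2, "originality": 0.05, "freshness": 0.9, "user_feedback": 0.3}

# The original, authoritative source should outrank the blog that copies it.
print(round(publisher_score(espn), 2), round(publisher_score(copycat_blog), 2))
```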
One thing that trumps everything is “What does a user want to read?” If a user wants to view clickbait articles, we won’t stop them as long as it isn’t illegal and doesn’t break our guidelines. We don’t impose on the user, but if something is unsafe or inappropriate, we do our due diligence before it hits the feed.
What are your views on journalists using generative AI to assist them with producing content?
I believe this question is an ethical one, and something we’re currently debating here at SmartNews. How should SmartNews view publishers submitting content produced by generative AI rather than written by journalists?
I believe that train has officially left the station. Today, journalists are using AI to augment their writing. It’s a function of scale: there isn’t enough time in the world to produce articles at a commercially viable rate, especially as news organizations continue to cut staff. The question then becomes, how much creativity goes into this? Is the article polished by the journalist, or is the journalist completely reliant on the tool?
At this juncture, generative AI is not able to write articles on breaking news events because there’s no training data for them. However, it can still give you a pretty good generic template. As an example, school shootings are sadly so common that generative AI could give a journalist a template, and the journalist could insert the school that was affected to produce a complete article.
From my standpoint working with SmartNews, there are two principles I think are worth considering. Firstly, we want publishers to be up front and tell us when content was generated by AI, and we want to label it as such. This way, when people are reading the article, they’re not misled about who wrote it. This is transparency of the highest order.
Secondly, we want that article to be factually correct. We know that generative AI tends to make things up, so any article written by generative AI needs to be proofread by a journalist or editorial staff.
You’ve previously argued that tech platforms should unite and create common standards to fight digital toxicity. How important an issue is this?
I believe this issue is of critical importance, not just for companies to operate ethically, but to maintain a level of dignity and civility. In my opinion, platforms should come together and develop certain standards to maintain this humanity. As an example, no one should ever be encouraged to take their own life, but in some situations, we find this type of abuse on platforms, and I believe that is something companies should come together to protect against.
Ultimately, when it comes to problems of humanity, there shouldn’t be competition. There shouldn’t even necessarily be competition on who is the cleanest or safest community—we should all aim to ensure our users feel safe and understood. Let’s compete on features, not exploitation.
What are some ways that digital companies can work together?
Companies should come together when there are shared values and the possibility of collaboration. There are always spaces where there is intersectionality across companies and industries, especially when it comes to fighting abuse, ensuring civility on platforms, or reducing polarization. These are moments when companies should be working together.
There is of course a commercial angle with competition, and typically competition is good. It helps ensure strength and differentiation across companies and delivers solutions with a level of efficacy monopolies cannot guarantee.
But, when it comes to protecting users, or promoting civility, or reducing abuse vectors, these are topics which are core to us preserving the free world. These are things we need to do to ensure we protect what is sacred to us, and our humanity. In my opinion, all platforms have a responsibility to collaborate in defense of human values and the values that make us a free world.
What are your current views on responsible AI?
We’re at the beginning of something that will be very pervasive in our lives. This next phase of generative AI is a problem we don’t fully understand, one we can only partially comprehend at this juncture.
When it comes to responsible AI, it’s so incredibly important that we develop strong guardrails, or else we may end up with a Frankenstein’s monster of generative AI technologies. We need to spend the time thinking through everything that could go wrong, whether that is bias creeping into the algorithms or large language models being used by the wrong people to do nefarious things.
The technology itself isn’t good or bad, but it can be used by bad people to do bad things. This is why investing the time and resources in AI ethicists to do adversarial testing to understand the design faults is so critical. This will help us understand how to prevent abuse, and I think that’s probably the most important aspect of responsible AI.
Because AI can’t yet think for itself, we need smart people who can build these defaults when AI is being programmed. The important aspect to consider right now is timing – we need these positive actors doing these things NOW before it’s too late.
Unlike other systems we’ve designed and built in the past, AI is different because it can iterate and learn on its own, so if you don’t set up strong guardrails on what and how it’s learning, we cannot control what it might become.
Right now, we’re seeing some big companies cut their ethics boards and responsible AI teams as part of major layoffs. It remains to be seen how seriously these tech majors are taking the technology and how seriously they’re weighing the potential downsides of AI in their decision making.
Is there anything else that you would like to share about your work with SmartNews?
I joined SmartNews because I believe in its mission; the mission has a certain purity to it. I strongly believe the world is becoming more polarized, and there isn’t enough media literacy today to help combat that trend.
Unfortunately, there are too many people who take WhatsApp messages as gospel and believe them at face value. That can lead to tremendous consequences, including—and especially—violence. This all boils down to people not understanding what they can and cannot believe.
If we do not educate people or show them how to judge the trustworthiness of what they’re consuming, and if we do not build the media literacy needed to discern between real news and fake news, we’ll continue to aggravate the problem and repeat the mistakes history has taught us to avoid.
One of the most important components of my work at SmartNews is to help reduce polarization in the world. I want to fulfill the founder’s mission of improving media literacy so that people can understand what they’re consuming, form informed opinions about the world, and appreciate its many diverse perspectives.
Thank you for the great interview. Readers who wish to learn more or want to try out a different type of news app should visit SmartNews.