Sean Gourley is the CEO & Founder of Primer, an enterprise that provides industrial-grade Natural Language Processing (NLP) applications for government agencies, financial institutions, Fortune 50 companies, and many other organizations.
One of the primary use cases of Primer is to help people detect, understand, and respond to disinformation campaigns. To strengthen these services, Primer recently acquired Yonder, bolstering its ability to combat disinformation attacks.
We sat down with Sean at the recent Ai4 conference in Las Vegas to discuss misinformation. Sean is a treasure trove of information on misinformation campaigns and government propaganda. Below is our interview.
What initially attracted you to learning about and understanding the whole misinformation ecosystem?
So, I talked a little about high-frequency trading, and it was interesting. I was looking through that space, and there was this whole class of algorithms trying to understand if something had happened in the world and then trade on it. And it was primitive. This was back in 2012, and we were picking up noise.
They were getting things wrong when they were reading it, or misinterpreting signals. And then there was a tweet that came out after AP's Twitter account had been hacked, saying, “Breaking: White House explosion, Barack Obama injured.” The market took that information and dropped, losing about $130 billion off that tweet. Now, I don’t know if it was intended to drive the market, to create havoc, or to do something else, but it was that connection, back at that time, that made me go, “Wow.” It’s like a butterfly flapping its wings and creating this sort of impact, because it’s leveraging a set of algorithmic actors that really aren’t that smart.
So I saw that, and it got me digging in, and then I started looking… And I saw there was a whole bunch of Mexican bots, Mexican accounts, that were claiming that journalists were not killed by drug cartels. And this is, like, 2013. And I was like, “Oh my God, you’ve got non-state actors, drug cartels, engaging in this thing to create a narrative battle. And it looks like they’re winning.” And I was like, “These aren’t even state actors.” At that point I sort of said, “Well, what if a state actor gets its hands on this? What would it look like, and what would that do?” And then I published a piece in Wired in 2015. It was actually written at the end of 2014, but it was about predictions for 2015. And I said, “This is going to be the year that bots are going to take over and control elections.”
And that piece turned out to be incredibly prescient, perhaps even more so than I originally thought. But it was just seeing this arc of technology. At the same time, we were also seeing just how rudimentary a lot of these technologies were. It was pretty primitive. So all of that started to come together, and then we got to this space we’ve ended up in, which is: wow, you can control narratives just through repeated exposure to information from the feeds people are consuming now.
Do you believe that an election has already been swayed, or that a future election may be swayed, by this type of misinformation?
Let me reframe this a bit. First, we’ve taken a very narrow view of this, which is that misinformation is whatever isn’t real. And I think that’s too narrow. We need to expand it to who’s controlling narratives. A narrative could very well be real, but that may not be the important part.
Take a story: if you’re worried about crime, we can just massively broadcast every crime across all of your channels, and you’ll come away thinking your city has just gone through the most horrific crime wave it’s ever seen. And all that’s been done is to take every single crime that was ever reported, crime that was previously not on your radar, and put it into your system.
Now you’re going to start believing that crime is much higher than it actually is. It hasn’t changed, but you’re exposed to more of it. So this is not about whether or not a given crime happened; it’s about whether or not it was fighting for your attention. That’s one piece. We get caught up in this question of did it happen or not, as though that is the battle. But the battle is really for people’s worldviews. So that’s one side of it.
The second piece here is, well, how much can you influence this stuff? Say you have no perception of whether or not vaccines cause you to lose 37 points of IQ, no previous belief that vaccines and intelligence were linked at all. If I told you first that vaccines increased your IQ, you would now be about three times more likely to hold that belief than if someone later came back and told you that vaccines caused your IQ to drop. So whoever comes in with the first message has the best chance of you holding that belief, if you didn’t previously have an opinion about it.
So is it a bias in the human brain that makes us vulnerable to this?
It’s bias in the human brain. The second bit is repetition: even if you know a piece of information to be false, the more you’re exposed to it, the more likely you are to believe it. So there’s the repetition of information. There’s the first piece of information on something you previously had no opinion about. And then the other pieces we know: if you get it from seemingly independent sources, you trust it more. And if you get it from people in your friend network, you trust it more; it becomes an affiliation mechanism. And we know that around 25% is a tipping point: if 25% of people start believing something, it becomes very likely that 75% or 80% of people will start believing it.
So all of this comes down to the world of opinion formation and the mechanics of how groups arrive at consensus. There’s a sub-branch of physics, computational physics, that’s been studying this, and you can go into the literature and understand these models of how opinions form, propagate, and are adopted.
You are able to engineer these systems if you so choose, and this is what people are starting to do. I don’t think they’ve been very sophisticated. But look, China is an AI superpower. They know that winning the information war around the dynamics of Taiwan is incredibly important. We should expect them to take every step the technology allows to engage and win in that conflict.
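For readers curious about the opinion-dynamics models Sean references, below is a minimal, illustrative sketch of one well-known model from that literature: the binary naming game with a committed minority of agents who never change their opinion. This is not Primer's technology, and the population size, step count, and well-mixed-population assumption are choices made purely for illustration. In this idealized model the critical committed fraction sits near 10%; the roughly 25% figure Sean cites comes from empirical experiments on social conventions.

```python
import random

A, B, AB = "A", "B", "AB"

def naming_game(n=1000, committed_frac=0.10, steps=400_000, seed=0):
    """Binary naming game with a committed minority (well-mixed population).

    The first n_committed agents are zealots who always voice opinion B and
    never update; everyone else starts out holding only A. Returns the final
    fraction of agents whose state contains B. Illustrative sketch only.
    """
    rng = random.Random(seed)
    n_committed = int(n * committed_frac)
    state = [B] * n_committed + [A] * (n - n_committed)

    def voiced(i):
        # An agent holding both opinions voices one of them at random.
        return rng.choice((A, B)) if state[i] == AB else state[i]

    for _ in range(steps):
        spk, lst = rng.randrange(n), rng.randrange(n)
        if spk == lst:
            continue
        word = voiced(spk)
        if word in state[lst]:
            # Successful communication: uncommitted parties collapse to the word.
            if lst >= n_committed:
                state[lst] = word
            if spk >= n_committed:
                state[spk] = word
        elif lst >= n_committed:
            # Unsuccessful: the uncommitted listener adds the new opinion.
            state[lst] = AB

    return sum(s != A for s in state) / n

if __name__ == "__main__":
    # Sweep the committed fraction to see the tipping behaviour.
    for frac in (0.06, 0.08, 0.10, 0.12, 0.14):
        share = naming_game(committed_frac=frac)
        print(f"committed minority {frac:.0%} -> share holding B: {share:.0%}")
```

Sweeping the committed fraction shows the qualitative point Sean is making: below the critical value the incumbent opinion largely persists, while above it the committed minority's opinion rapidly takes over the whole population.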
We know how humans fall victim to this, but how does an AI detect the root truth?
We need to get away from this question of what is true. What I mean by that is that this is not about bringing fact-checkers to everything. Because, as we said, an AI could determine that, yes, this crime did happen, but it doesn’t matter. That’s not the point. You now believe there’s a massive crime wave in your city and that something needs to be done about it. Right? And what is the root truth about whether or not Taiwan belongs to China? I mean, you can go back historically, and we can debate that, but does the AI know? Do we know exactly where that is?
We know it’s a relatively divisive issue. The US and Taiwan have one set of belief structures; China has another. The AI’s job is not to figure out who’s right on that. The AI’s job is to determine whether or not someone has tried to actively influence you to believe one side or the other.
And that’s the second bit: people will use artificial intelligence to attempt to persuade you to believe one side or the other, using the methods of getting the first belief in, creating continued repetition of exposure, putting forward seemingly independent voices, and controlling and choosing 25% of your influence network in the hope of getting you to flip to the other side. These are all techniques that AI can enhance, and that AI can also defend against. But we need to get away from this question of whether AI can determine the truth, because oftentimes the truth is less important than the battle for the narrative.
And how does Primer assist in this?
So primarily it’s surveillance and detection. One of the things that emerges is that we’re in a place where the first time we hear about a misinformation campaign is often six months after it happened.
Meaning we are never able to catch this in real time?
Well, it’s not even close to real time. To put it in kinetic terms, it’s as if missiles have moved onto the border of the US, and six months later we realize they’ve been fired at us. That’s where we are. We’ve got a massive gap to fill in surveillance and reconnaissance around misinformation, or information operations, as we call them. If an information operation has been conducted, we need real-time, or near-real-time, understanding of what it looks like, what the motivations are, and what the manipulations are, and to put that in a place where we can either act to shut down those bot networks or look to start limiting the spread of that information.
And what’s your view on the ultimate threat that China poses to the world?
It’s a big question. What we do know is that we are in an AI arms race with China, because the winner of that race is going to have a very dominant military advantage over whoever comes second. So China is going to pursue the components of artificial intelligence very aggressively. The second piece is the prominence of chip manufacturing and the advanced GPUs that Taiwan has. And I think if China is looking at that, and it could have the fabrication labs, facilities, and foundries of Taiwan, it would be in a very, very dominant position in an AI arms race.
Now, if China takes that by force, there’s a good chance they end up destroying those capabilities, or that the capabilities are destroyed by Taiwan and no one has them. Which would be fine on some level for China. There’d be huge disruption globally, but it wouldn’t necessarily create as much of an issue for China if that happened.
However, if they can take it whole, that’s a massive advantage for them. And the way you do that is the way we’ve seen in Hong Kong: “You’re really part of us.” You convince the rest of the world that they’re really part of you, and if anyone tries to dissent against that belief or opinion, there are going to be economic or other consequences. So what China is engaged in is going to be a global influence operation to convince the world that Taiwan is part of China, and to convince the Taiwanese people that they’re part of China. And if they can do that, then all of a sudden China has its cake and can eat it too.
So where we’re going to see that is in China projecting narrative information operations to convince the world that Taiwan is part of China. And this is being recognized by the House Intelligence Committee. As they’ve put together the Intelligence Authorization Act, they’ve specifically called out methods for detecting Chinese influence operations in the Caribbean, South America, and Central America.
That’s a specific call-out for the intelligence community to work on this. The Intelligence Authorization Act also called out the ability to adopt artificial intelligence, to use commercial off-the-shelf technologies, and to deploy no-code environments into these organizations. There’s been some really constructive stuff coming off the Hill. Some of the pieces that are open and available to read have been very interesting on combating Chinese misinformation campaigns, particularly in countries in South America, Central America, and the Caribbean.
How much of a role do you think recommender engines like YouTube, Facebook, Twitter, and TikTok play in the amplification of these false narratives?
One thing we know about China is that they stopped Facebook from coming in. Facebook went into China. In 2012, Mark Zuckerberg was like, “I’m going to learn Mandarin. It’s all about China.” And China was like, “We’ll let you in, and now goodbye.”
Because China was like, “Well, like hell we’re going to let an American company control the information feeds of our population. We’ve worked very hard, thank you very much, to control the information that our population gets. This is what we do. We’re very good at it. We’ll let you in, we’ll learn from you, but after that, you’re gone.” So now Facebook is not there.
So the CCP (Chinese Communist Party) made a very clear decision about how it wanted to control information. Now, as an authoritarian regime, you have that control: “You’re not going to think these ideas. You’re not going to say these things. You’re not going to do that.” As a liberal democracy, we have to engage in debate and the exchange of ideas in a relatively free and open space. There are certain limits, but they’re right at the edges. So China, I think, was able to control information. The US can’t to the same extent, because it’s a democracy. We’ve got to let the messiness of information fight itself out, which is why we’ve become susceptible to these types of information attacks. There’s a big asymmetry in the system: “You want to enable freedom of discussion? Well, we’re going to corrupt that. And you can’t do the same to us, because we’ve got very good lockdowns on our information systems.”
I think that’s something we’re going to be wrestling with. The first kind of protocol is to quickly identify whether an information operation is being run on the platform, alert the systems, and at the very least limit the diffusion of that information, but also, if there are bot networks, take them down quickly. The recommendation side of this… I’d say this hasn’t been used maliciously by US companies. I don’t think that’s where it is. Could it be used maliciously by an external company? Yeah, potentially. If you were at war with someone, and they had control of an information network that was feeding narratives to your population, would you trust them to be fair and unbiased?
Well, we already know what’s happened with VK, the Russian social network. They said, “You’re not going to post any images of the dead people we’ve killed in Ukraine. That’s just banned. And we’re going to put up an information wall to stop the diffusion of information from the Western networks into our networks.” So what happens when a conflict starts is that people put up barriers that say, “You’re not going to have influence over our population.”
So I think, textbook, it took maybe three or four weeks for VK to do that. There was a window where I think it was relatively open, but then they shut it down, because they realized that if you don’t control the narrative in your population, and you let them see the damage the missiles and attacks are having on the civilian population in Ukraine, maybe they don’t want to fight anymore.
I think one of the things we were all surprised about with Ukraine was how strongly they came back. And I don’t think we were wrong in thinking it could have been a very quick war if the people of Ukraine had put down their guns. But what happened was the opposite. They came back into the country, picked up the Kalashnikovs, and started fighting. And that was because the narrative flipped, and it flipped on some key things: the Ghost of Kiev, Snake Island, the woman telling the soldier to put sunflower seeds in his pockets and saying, “When you die, the sunflowers will come.” These moments, these memes, resonated through that population.
Now, a lot of them weren’t actually true, or they were manufactured. The Ghost of Kiev and Snake Island were amplified. The Snake Island defenders survived, but they were cast as martyrs who said, “F*** you,” and got killed. This was an information war par excellence that created a fighting spirit, not only in Ukraine but also in the NATO member countries and the US, with Ukraine flags flying around and people saying, “We need to care about this.” That didn’t have to be the case. The easiest war to win is one where the opposition doesn’t want to fight. That was the bet and the gamble that the Russians made, that Putin made, and he was wrong. And he was wrong because we’d never seen an influence operation come back to win a narrative war the way we saw with Ukraine. And it was excellent.
I think we underestimate the power of information operations. I think China has been paying very close attention, and I think we’re going to see that play out. We are already seeing it play out vis-à-vis Taiwan and what the rest of the world thinks about it. And the US, I think, if it wants to engage, needs to have the same set of skills to combat information operations.
Are there any last words that you wanted to share about Primer?
We are engaged in this mission to support the US and its allies by bringing the best technology and putting it in the warfighter’s hands, and I think we want more technology companies to come and join this fight.
Thank you for this enlightening interview. Readers who wish to learn more are urged to visit Primer.