Billions of dollars are flooding into AI. Yet AI models are already exhibiting bias, as evidenced by mortgage discrimination against Black prospective homeowners.
It’s reasonable to ask what role ethics plays in the building of this technology and, perhaps more importantly, where investors fit in as they rush to fund it.
A founder recently told TechCrunch+ that it’s hard to think about ethics when innovation is moving so fast: people build systems, break them and then revise them. That puts some of the onus on investors to make sure these new technologies are being built by founders who keep ethics in mind.
To see whether that’s happening, TechCrunch+ spoke with four active investors in the space about how they think about ethics in AI and how founders can be encouraged to think more about biases and doing the right thing.
We’re widening our lens, looking for more investors to participate in TechCrunch surveys, where we poll top professionals about challenges in their industry.
If you’re an investor and would like to participate in future surveys, fill out this form.
Some investors said they tackle this by doing due diligence on a founder’s ethics to help determine whether they’ll continue to make decisions the firm can support.
“Founder empathy is a huge green flag for us,” said Alexis Alston, principal at Lightship Capital. “Such people understand that while we are looking for market returns, we are also looking for our investments to not cause a negative impact on the globe.”
Other investors think that asking hard questions can help separate the wheat from the chaff. “Any technology brings with it unintended consequences, be it bias, reduced human agency, breaches of privacy, or something else,” said Deep Nishar, managing director at General Catalyst. “Our investment process centers around identifying such unintended consequences, discussing them with founding teams, and assessing whether safeguards are or will be in place to mitigate them.”
Government policies are also taking aim at AI: the EU has passed machine learning laws, and the U.S. has introduced plans for an AI task force to start looking at the risks associated with AI. That’s in addition to the AI Bill of Rights introduced last year. And with many top VC firms injecting money into AI efforts in China, it’s worth asking how ethical standards for AI can be enforced across borders as well.
Read on to find out how investors are approaching due diligence, the green flags they look for, and their expectations of regulations in AI.
We spoke with:
Alexis Alston, principal, Lightship Capital
When investing in an AI company, how much due diligence do you do on how its AI model handles, or purports to handle, bias?
For us, it’s important to understand exactly what data the model ingests, where that data comes from and how it’s cleaned. We do quite a bit of technical diligence with our AI-focused GP to make sure the models we back can be trained to mitigate or eliminate bias.
We all remember the automatic faucets that wouldn’t turn on for darker-skinned hands, and the times Google image search “accidentally” equated Black skin with primates. I’ll do everything in my power to make sure we don’t end up with models like that in our portfolio.
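To make that kind of diligence concrete, below is a minimal sketch of one check a technical diligence team might run on a lending model’s outputs: measuring the demographic parity gap, i.e., the spread in approval rates across groups. The column names, tolerance and toy data are all hypothetical illustrations, not a description of Lightship Capital’s actual process.

```python
import pandas as pd

# Hypothetical bias audit: compare a model's approval rates across a
# protected attribute. A large gap is a signal to dig into the training
# data and cleaning pipeline before going further.

def demographic_parity_gap(df: pd.DataFrame,
                           group_col: str = "group",
                           outcome_col: str = "approved") -> float:
    """Return the spread between the highest and lowest approval
    rates across groups; 0.0 means every group is approved at the
    same rate."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Toy example: model decisions joined with applicant demographics.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})

gap = demographic_parity_gap(decisions)
TOLERANCE = 0.10  # assumed threshold; real audits set this per use case
print(f"Demographic parity gap: {gap:.2f}")
if gap > TOLERANCE:
    print("Flag: approval rates diverge across groups; review the data.")
```

Demographic parity is only one of several fairness criteria (equalized odds and calibration are common alternatives), and a single number like this is a starting point for the “where does the data come from” conversation, not a verdict.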
How would the U.S. passing machine learning laws similar to the EU’s affect the pace of innovation the country sees in this sector?
Given the lack of technical knowledge and sophistication in our government, I have very little faith in the U.S.’ ability to pass actionable and accurate legislation around machine learning. We have such a long lag when it comes to passing timely legislation and getting technical experts onto the task forces that inform our legislators.
I actually don’t see legislation making any major changes in the pace of ML development, given how our laws are usually structured. Much like the cat-and-mouse game over designer drug legislation in the U.S. a decade ago, the law never could keep up.