In June 2020, taking advantage of a Chicago law requiring ride-hailing apps to disclose their prices, researchers from George Washington University published an analysis of the fare-setting algorithms used by companies like Uber and Lyft. The analysis found evidence that riders picked up in neighborhoods with older, lower-income and less-educated populations were charged more than riders hailing from affluent areas, an effect the researchers attributed to the high popularity of, and thus the high demand for, ride-hailing in richer neighborhoods.
Uber and Lyft rejected the study's findings, citing flaws in its methodology. But it was hardly the first study to identify troubling inconsistencies in the apps' algorithmic decision-making.
Riders aren't the only ones to be victimized by routing and pricing algorithms. Uber recently faced criticism for introducing "upfront fares" for drivers, a system that uses an algorithm to calculate fares in advance based on factors that aren't always in drivers' favor.
In the delivery space, Amazon's routing system reportedly encourages drivers to make dangerous on-the-road decisions in pursuit of shorter delivery windows. Meanwhile, apps like DoorDash and Instacart use algorithms to calculate pay for couriers — algorithms that some delivery people say have made their earnings harder to predict.
As experts like Amos Toh, a senior researcher at Human Rights Watch who studies the effects of AI and algorithms on gig work, note, the more opaque these algorithms are, the harder it is for regulators and the public to hold companies accountable.