How Artificial Intelligence Is Edging Its Way Into Our Lives

Over the weekend, I wrote about Andrew Yang, a former tech executive who has decided to run for president in 2020 as a Democrat on a “beware the robots” platform. He thinks that with innovations like self-driving cars and grocery stores without cashiers just around the corner, we’re about to move into a frightening new era of mass unemployment and social unrest.

So he’s proposing a universal basic income plan called the “Freedom Dividend,” which would give every American adult $1,000 a month to guarantee them a minimum standard of living while they retrain themselves for new kinds of work.

Mr. Yang’s campaign is a long shot, and there are significant hurdles to making universal basic income politically feasible. But the conversation about automation’s social and economic consequences is long overdue. Even if he doesn’t win the election, Mr. Yang may have hit on the next big political wedge issue. — Kevin Roose

Waymo faces hurdles in self-driving cars

Waymo just settled its closely watched lawsuit with Uber over stolen trade secrets. With the settlement, Waymo received a stake in Uber worth $245 million and Uber agreed that no Waymo technology would be used in its own autonomous vehicles.

But Waymo now faces a much bigger fight in autonomous vehicles, which we chronicle here. Uber is just one of many companies now competing with Waymo on driverless cars, and much of this competition is driven by ex-Waymo engineers. Waymo’s chief executive, John Krafcik, is set to take the stage at the New Work Summit on Monday night to discuss the company’s future, including the ride-hailing service it says will soon launch in Arizona. — Cade Metz

Artificial intelligence may be biased

In modern artificial intelligence, data rules. A.I. software is only as smart as the data used to train it, as Steve Lohr recently wrote, and that means that some of the biases in the real world can seep into A.I.

If the training data contains many more white men than black women, for example, the resulting system will be worse at identifying the black women. That appears to be the case with some popular commercial facial recognition software.
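The effect is easy to reproduce in miniature. The sketch below is purely illustrative — it is not any real facial recognition system — and assumes two synthetic groups whose feature distributions overlap, with one group heavily underrepresented in the training set. A simple nearest-neighbour classifier trained on that data ends up far less accurate on the minority group.

```python
import random

random.seed(0)

# Hypothetical setup: a single 1-D "feature" per example. Group B's values
# sit slightly higher than group A's, and the two distributions overlap.
def sample(group):
    base = 0.0 if group == "A" else 1.0
    return base + random.gauss(0, 0.6), group

# Imbalanced training set: 950 examples of group A, only 50 of group B.
train = [sample("A") for _ in range(950)] + [sample("B") for _ in range(50)]

# A 1-nearest-neighbour "model": predict the group of the closest
# training example.
def predict(x):
    return min(train, key=lambda t: abs(t[0] - x))[1]

# Measure accuracy separately for each group on fresh samples.
def accuracy(group, n=500):
    hits = sum(predict(sample(group)[0]) == group for _ in range(n))
    return hits / n

print(f"accuracy on majority group A: {accuracy('A'):.2f}")
print(f"accuracy on minority group B: {accuracy('B'):.2f}")
```

Because group A's examples densely cover the overlapping region, the nearest neighbour of a borderline group-B example is usually from group A, so the minority group's accuracy drops sharply even though nothing about the algorithm itself singles that group out.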

Joy Buolamwini, a researcher at the M.I.T. Media Lab, found that such software can correctly identify the gender of a white man in a photograph 99 percent of the time. But for darker-skinned women, it is wrong nearly 35 percent of the time. — Joseph Plambeck
