Self-Driven Cars Are More Likely To Run Over Black People, Reveals Study

While we know that self-driving cars have their own problems, an inability to detect dark-skinned people has just become one of the new technology's most serious.

While the world of technology has seen plenty of promising developments, the findings of a recent study suggest it is still far from fully evolved.

The technology and the 'artificial intelligence' used in self-driving cars might lead to more deaths of black people on the road. Some drivers might dismiss this as a mere glitch, but there is far more to it.

According to The Independent, a new study has found that the technology used in self-driving cars has a racial bias that makes autonomous vehicles more likely to drive into black people.

Researchers at the Georgia Institute of Technology have found that state-of-the-art detection systems, such as the sensors and cameras used in self-driving cars, are better at detecting people with lighter skin tones.

In lay terms, cars driven by these systems would have a significantly lower chance of spotting black people and coming to a stop before crashing into them.

While the nation's intelligentsia is already grimacing over cases of institutional racism and bias, this particular problem is being classified as algorithmic bias.

The authors of the study started out with a simple question: How accurately do state-of-the-art object-detection models, like those used by self-driving cars, detect people from different demographic groups?

To find out, they looked at a large dataset of images that contain pedestrians. They divided up the people using the Fitzpatrick scale, a system for classifying human skin tones from light to dark.

The researchers then analyzed how often the models correctly detected the presence of people in the light-skinned group versus how often they got it right with people in the dark-skinned group.
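For readers who want to see what that comparison looks like in practice, here is a minimal sketch in Python. The class names, thresholds, and toy data are our own illustrative assumptions, not the study's actual code; it simply supposes each annotated pedestrian carries a Fitzpatrick type and a flag saying whether the detector found them.

```python
from dataclasses import dataclass

@dataclass
class PedestrianAnnotation:
    fitzpatrick_type: int  # 1 (lightest) to 6 (darkest) on the Fitzpatrick scale
    detected: bool         # whether the object detector found this person

def detection_rate(annotations):
    """Fraction of annotated pedestrians the model actually detected."""
    if not annotations:
        return float("nan")
    return sum(a.detected for a in annotations) / len(annotations)

def group_rates(annotations):
    """Detection rates for lighter (types 1-3) versus darker (types 4-6) skin."""
    lighter = [a for a in annotations if a.fitzpatrick_type <= 3]
    darker = [a for a in annotations if a.fitzpatrick_type >= 4]
    return detection_rate(lighter), detection_rate(darker)

# Toy data: a light-minus-dark gap of roughly 0.05 would mirror the
# five-percentage-point average difference the study reports.
sample = [
    PedestrianAnnotation(2, True), PedestrianAnnotation(1, True),
    PedestrianAnnotation(5, False), PedestrianAnnotation(6, True),
]
light, dark = group_rates(sample)
print(f"light: {light:.2f}  dark: {dark:.2f}  gap: {light - dark:.2f}")
```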

The researchers said they undertook the study after observing that such systems produced higher error rates for certain demographic groups.

The results were far from amusing, and they are likely to cause serious problems if left uncorrected. Tests on eight image-recognition systems found that the bias held true: on average, the systems were five percentage points less accurate for people with darker skin.

Furthermore, while it is natural to assume that the bias would be more pronounced at night, the average accuracy gap remained the same regardless of the time of day.

Obstructing the image-detection systems' view of pedestrians likewise yielded the same result: the gap persisted.
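If one wanted to check, as the researchers did, whether the gap moves with capture conditions, the same comparison can simply be repeated within each slice of the data. The sketch below extends the earlier one; the condition labels are hypothetical choices of our own, not the study's encoding.

```python
from collections import defaultdict

def gap_by_condition(records):
    """records: iterable of (condition, fitzpatrick_type, detected) tuples."""
    # Per condition, track [hits, total] for the lighter and darker groups.
    buckets = defaultdict(lambda: {"light": [0, 0], "dark": [0, 0]})
    for condition, skin_type, detected in records:
        group = "light" if skin_type <= 3 else "dark"
        buckets[condition][group][0] += int(detected)
        buckets[condition][group][1] += 1
    gaps = {}
    for condition, groups in buckets.items():
        rate = {g: hits / total for g, (hits, total) in groups.items() if total}
        if len(rate) == 2:
            gaps[condition] = rate["light"] - rate["dark"]
    return gaps

sample = [
    ("day", 2, True), ("day", 5, False), ("day", 1, True), ("day", 6, True),
    ("night", 3, True), ("night", 4, False), ("night", 2, True), ("night", 6, False),
]
# A gap that stays similar across slices mirrors the study's finding that
# time of day and obstruction did not explain the disparity.
print(gap_by_condition(sample))
```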

"We hope this study provides compelling evidence of the real problem that may arise if this source of capture bias is not considered before deploying these sort of recognition models," the study concluded. 

AI researcher Kate Crawford, who was not involved in the study, highlighted the dangers of such systems if these issues are not addressed by the companies developing self-driving cars. 

“Pedestrian deaths by self-driving cars are already here – but they're not evenly distributed,” she tweeted. 

Other AI experts responded to her tweet by pointing out that the paper did not use the datasets actually employed by autonomous vehicle developers, and so may not reflect the real accuracy of deployed systems.

“In an ideal world, academics would be testing the actual models and training sets used by autonomous car manufacturers,” she responded. 

“The main takeaway from our work is that vision systems that share common structures to the ones we tested should be looked at more closely,” Jamie Morgenstern, one of the authors of the study, told Vox.

According to the Vox report, the study's insights add to a growing body of evidence showing how human bias can creep into automated decision-making systems, a problem often referred to as algorithmic bias.

The most famous example came to light in 2015, when Google’s image-recognition system labeled African Americans as “gorillas.”

Three years later, Amazon's Rekognition system drew criticism for matching 28 members of Congress to criminal mugshots.

Another study found that facial-recognition systems from IBM, Microsoft, and China's Megvii were more likely to misidentify the gender of dark-skinned people (especially women) than that of light-skinned people.
