While we know that self-driving cars have their own problems, an inability to detect dark-skinned people has just become one of the most serious problems facing the new technology.
While people have seen a lot of promising developments in the world of technology, some might say it is far from mature, especially after the findings of a recent study came to light.
The technology and the 'artificial intelligence' used in self-driving cars might lead to more deaths of black people on the road. Some drivers could dismiss this as a glitch, but there is far more to it.
Researchers at the Georgia Institute of Technology have found that state-of-the-art detection systems, such as the sensors and cameras used in self-driving cars, are better at detecting people with lighter skin tones.
In lay terms, cars driven by these systems would have a significantly lower chance of spotting black people and coming to a stop before crashing into them.
While the nation's intelligentsia is already grimacing over cases of institutional racism and bias, this particular problem is being classified as algorithmic bias.
https://t.co/uwd1GwTBhK and who out there thinks that the makers can fix this tiny little ? i wonder what the excuse will be when they begin to run over black people- "my car didn't see him"...— h (@harlod5) 7 March 2019
The authors of the study started out with a simple question: How accurately do state-of-the-art object-detection models, like those used by self-driving cars, detect people from different demographic groups?
To find out, they looked at a large dataset of images that contain pedestrians. They divided up the people using the Fitzpatrick scale, a system for classifying human skin tones from light to dark.
Guess what? Study shows that self-driving cars are better at detecting pedestrians with lighter skin tones.— Kate Crawford (@katecrawford) 28 February 2019
Translation: Pedestrian deaths by self-driving cars are already here - but they're not evenly distributed. https://t.co/4LIwTRJKJQ
The researchers then analyzed how often the models correctly detected the presence of people in the light-skinned group versus how often they got it right with people in the dark-skinned group.
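The comparison described above amounts to measuring a detector's recall separately for each skin-tone group. The sketch below illustrates the idea with a hypothetical list of labeled detections; it is not the study's actual code, and the sample data and function names are invented for illustration.

```python
# Illustrative sketch (not the study's code): compare a pedestrian
# detector's per-group recall, with groups defined by Fitzpatrick
# skin type (I-III = lighter, IV-VI = darker). Data is hypothetical.

detections = [
    # (fitzpatrick_type, was_detected)
    (1, True), (2, True), (3, False), (2, True),
    (4, False), (5, True), (6, False), (5, False),
]

def group_recall(samples, types):
    """Fraction of pedestrians in the given Fitzpatrick types the model found."""
    hits = [detected for skin_type, detected in samples if skin_type in types]
    return sum(hits) / len(hits)

light = group_recall(detections, {1, 2, 3})  # Fitzpatrick I-III
dark = group_recall(detections, {4, 5, 6})   # Fitzpatrick IV-VI
print(f"lighter-skin recall: {light:.2f}, darker-skin recall: {dark:.2f}")
```

A gap between the two numbers, as in this toy data, is the kind of disparity the researchers were testing for across real detection models.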
The researchers said they undertook the study after observing higher error rates for certain demographics by such systems.
That's not an accurate summary of the paper, unless the researchers got their hands on the actual models used by a self-driving car, rather than the ones from academic papers using similar techniques?— Amber Yust (@Aiiane) 28 February 2019
Training data is one of the largest differentiating factors in the industry.
The results were far from amusing, and are likely to cause serious problems if left uncorrected. Tests on eight image-recognition systems found that the bias held true: on average, the systems were five percent less accurate for people with darker skin.
Furthermore, while it is natural to assume that the bias might be more pronounced at night, the average accuracy remained the same regardless of the time of day.
Partially obstructing the image-detection systems' view also yielded the same result.
I agree - in an ideal world, academics would be testing the actual models and training sets used by autonomous car manufacturers. But given those are never made available (a problem in itself), papers like these offer strong insights into very real risks.— Kate Crawford (@katecrawford) 28 February 2019
"We hope this study provides compelling evidence of the real problem that may arise if this source of capture bias is not considered before deploying these sort of recognition models," the study concluded.
AI researcher Kate Crawford, who was not involved in the study, highlighted the dangers of such systems if these issues are not addressed by the companies developing self-driving cars.
“Pedestrian deaths by self-driving cars are already here – but they're not evenly distributed,” she tweeted.
Other AI experts responded to her tweet by pointing out that the paper did not use the datasets used by autonomous vehicle developers, and so may not reflect the accuracy of real-world systems.
“In an ideal world, academics would be testing the actual models and training sets used by autonomous car manufacturers,” she responded.
I think it means the self driving car is more likely to literally hit black people.. not them buy them over white people— Ash (@amarreola1214) 7 March 2019
“The main takeaway from our work is that vision systems that share common structures to the ones we tested should be looked at more closely,” Jamie Morgenstern, one of the authors of the study, told Vox.
According to the Vox report, the study adds to a growing body of evidence showing how human bias can creep into automated decision-making systems, a phenomenon often called algorithmic bias.
If these self driving cars are programmed to hit Black people, I’m curious what these robots they keep creating will do.— thickums 💋 (@Kim_Khandashisa) 7 March 2019
The most famous example came to light in 2015, when Google’s image-recognition system labeled African Americans as “gorillas.”
Three years later, Amazon’s Rekognition system drew criticism for incorrectly matching 28 members of Congress to criminal mugshots.
Another study found that three facial-recognition systems, from IBM, Microsoft, and China’s Megvii, were more likely to misidentify the gender of dark-skinned people (especially women) than that of light-skinned people.