In Part 1 of this series, we discussed sample bias, measurement bias, and prejudice bias in Artificial Intelligence. From the beginning, ZeroEyes has set out to eliminate AI bias from our gun detection platform by focusing on object detection (in this case, weapons of all shapes and sizes), not facial or speech recognition.
The ZeroEyes Fairness Report
ZeroEyes works to eliminate sample and measurement bias from our platform through extensive model training and testing. We record live images and video feeds that show hundreds of types of weapons, then meticulously annotate hundreds of thousands of images to train our AI models to detect real-world examples of these visibly brandished guns.
We also train our models to detect weapons in a variety of conditions, including daytime, nighttime, and low-light environments such as parking garages and dark buildings.
What’s more, by focusing on object detection rather than detection of human elements, ZeroEyes has worked to remove prejudice bias from our AI platform training. Since our AI detects visible, brandished guns rather than the person holding the gun, we avoid the privacy and bias concerns that plague other AI models.
However, because we know bias can creep into AI platforms unintentionally, ZeroEyes recently conducted its own study, in partnership with a large commercial retailer, to examine ethical bias in our proprietary AI object (gun) detection software, DeepZero™.
The “Fairness Report” analyzed the frequency and distribution of human “false positives” from that system; that is, how often the AI mistook a human being, of any skin tone, for the object it is trained to detect (a visible, brandished gun).
The Methods and Results of our Study
Through our Fairness Report, we set out to examine specific questions related to possible biases in our AI platform, including:
- Are the networks learning components of guns, or components of hands holding objects?
- Are false-positive detections more common based on the “luminance” of an object or person?
- What methods can we use to determine if our network has formed significant biases?
- What conclusions can we draw from the study and apply to future applications?
To begin, we drafted a problem statement:
“Have the networks developed by ZeroEyes for handheld weapon detection learned to incorrectly base detections on the presence of a human hand instead of a firearm, and are these false positive detections biased with regards to the race/ethnicity of the person?”
Then, we collected and analyzed data:
- To determine the existence and extent of any demographic bias, the ZeroEyes Annotation Team labeled over 300,000 data points in 29,817 false positive detection images received from a sample of 11 commercial buildings, shopping malls, and schools.
- The result: only 183 of the 278,695 people present in the analyzed frames were detected as false positives, meaning that just 0.0657% of the people in those frames were mistakenly flagged as guns (see the worked calculation after this list).
- Based on the analysis of the false-positive dataset, the likelihood that a person of any skin tone was detected as a weapon was statistically insignificant compared with the likelihood that a non-human object was. That’s because the DeepZero™ AI favors object dimensions, the size and shape of the weapon, over object luminance, the lightness or darkness present in either objects or skin tones.
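For readers who want to verify the headline figure, the short sketch below recomputes it in Python from the counts reported above. Only the numbers stated in this report are used; nothing else is assumed.

```python
# Recompute the Fairness Report's headline figure from the reported counts.
people_present = 278_695        # people visible across the analyzed frames
human_false_positives = 183     # people mistakenly detected as guns

rate = human_false_positives / people_present
print(f"Human false-positive rate: {rate:.6f} ({rate:.4%} of people present)")
# -> Human false-positive rate: 0.000657 (0.0657% of people present)
```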
Of course, improvements to AI software are never-ending. As we continue to eliminate AI bias in our models, future applications of our findings include improving data diversity to widen the range of luminance values in our detections, improving data augmentation, and further optimizing our models for accuracy. A minimal sketch of one such luminance-focused augmentation follows.
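As an illustration only, here is a minimal sketch of the kind of luminance-focused augmentation described above. It is not the DeepZero™ training pipeline; the function names and parameter ranges are assumptions chosen for this example. It randomly shifts each training image’s gamma and brightness with NumPy so the same weapon shapes appear across a wider spread of luminance values, and it uses the standard ITU-R BT.709 weights to measure that spread.

```python
import numpy as np

def relative_luminance(image: np.ndarray) -> float:
    """Mean relative luminance of an RGB image (ITU-R BT.709 weights)."""
    img = image.astype(np.float32) / 255.0
    return float((img @ np.array([0.2126, 0.7152, 0.0722])).mean())

def augment_luminance(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Randomly vary the luminance of an HxWx3 uint8 RGB training image.

    Illustrative parameter ranges; not values from the DeepZero pipeline.
    """
    img = image.astype(np.float32) / 255.0

    # Gamma shift: <1 brightens shadows, >1 deepens them, simulating
    # low-light scenes such as parking garages and dark buildings.
    img = img ** rng.uniform(0.5, 2.0)

    # Global brightness scale, simulating exposure differences.
    img = np.clip(img * rng.uniform(0.6, 1.4), 0.0, 1.0)

    return (img * 255.0).astype(np.uint8)

# Example: widen the luminance spread of a frame before training.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(480, 640, 3), dtype=np.uint8)  # stand-in frame
augmented = augment_luminance(frame, rng)
print(relative_luminance(frame), relative_luminance(augmented))
```

Exposing a model to the same weapons under many lighting levels is one common way to reduce any residual dependence on luminance, which is exactly the failure mode the Fairness Report set out to rule out.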
Learn more about how our proprietary AI gun detection technology is developed and applied in schools, commercial businesses, and government facilities across America.