By Dennis R. Mortensen | LinkedIn
Much attention has, rightfully, been given to how the AI industry might transmit existing negative biases into the myriad artificially intelligent systems now being built. As numerous articles and studies have pointed out, we are often entirely unaware of the biases our data carries, and so we risk just as unconsciously porting them into any AI we develop.
Here’s how this can work: According to a recent study, names like “Brett” and “Allison” were found by a machine to be more similar to positive words, including words like “love” and “laughter.” Conversely, names like “Alonzo” and “Shaniqua” were more closely related to negative words, such as “cancer” and “failure.” These results came from analyzing word embeddings (the vector representations of words that machine-learning models learn from large bodies of text), which showed that, to the computer, bias inhered in the data, or more visibly, in words. That’s right: over time, all of our biased human interactions and presumptions attach bias to individual words themselves.
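To make the idea concrete, here is a minimal sketch of how such an association is measured: each word is a vector, and “similarity” is the cosine of the angle between two vectors. The vectors below are hand-made toy values chosen purely to illustrate the effect; real studies use embeddings trained on billions of words, which is where the bias comes from.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy, hand-made 3-d vectors (NOT real embeddings), arranged so that
# the "biased" associations described in the article show up numerically.
vectors = {
    "Brett":    [0.9, 0.1, 0.2],
    "Shaniqua": [0.1, 0.9, 0.2],
    "love":     [0.8, 0.2, 0.1],
    "failure":  [0.2, 0.8, 0.3],
}

print(cosine(vectors["Brett"], vectors["love"]))       # high similarity
print(cosine(vectors["Shaniqua"], vectors["failure"]))  # high similarity
print(cosine(vectors["Brett"], vectors["failure"]))    # noticeably lower
```

In a trained embedding, those relative distances are not chosen by anyone; they fall out of the co-occurrence patterns in the training text, which is exactly how human bias ends up encoded in the numbers.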