Using Artificial Intelligence to Program Humans to Behave Better

By Dennis R. Mortensen | LinkedIn

Much attention has rightly been given to how the AI industry might transmit existing negative biases into the myriad of artificially intelligent systems now being built. As numerous articles and studies have pointed out, we are often entirely unaware of the biases our data inherits, and so we risk just as unconsciously porting those biases into any AI we develop.

Here’s how this can work: According to a recent study, a machine found names like “Brett” and “Allison” to be more similar to positive words, including words like “love” and “laughter.” Conversely, names like “Alonzo” and “Shaniqua” were more closely related to negative words, such as “cancer” and “failure.” These results came from a particular type of analysis (word-embedding analysis), which showed that, to the computer, bias inheres in the data itself, or more visibly, in individual words. That’s right: over time, all of our biased human interactions and presumptions attach bias to the words themselves.
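To make the idea concrete, here is a minimal sketch of how an embedding analysis can surface this kind of bias. The vectors below are made-up toy values, not real embeddings, and the names `name_a` and `name_b` are hypothetical placeholders; a word is scored by how much closer its vector sits to "pleasant" words than to "unpleasant" ones, in the spirit of the similarity tests the study describes.

```python
# A toy illustration of embedding-based bias measurement.
# All vectors here are invented 3-d values for demonstration only;
# real studies use embeddings trained on large text corpora.
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hand-picked toy "embeddings" that encode the effect we want to show.
vectors = {
    "love":     (0.9, 0.1, 0.0),
    "laughter": (0.8, 0.2, 0.1),
    "cancer":   (0.0, 0.9, 0.2),
    "failure":  (0.1, 0.8, 0.1),
    "name_a":   (0.7, 0.2, 0.1),  # hypothetical name near the pleasant words
    "name_b":   (0.2, 0.7, 0.2),  # hypothetical name near the unpleasant words
}

def bias_score(word, pleasant, unpleasant):
    """Mean similarity to pleasant words minus mean similarity to unpleasant words."""
    pos = sum(cosine(vectors[word], vectors[p]) for p in pleasant) / len(pleasant)
    neg = sum(cosine(vectors[word], vectors[n]) for n in unpleasant) / len(unpleasant)
    return pos - neg

pleasant = ["love", "laughter"]
unpleasant = ["cancer", "failure"]
score_a = bias_score("name_a", pleasant, unpleasant)  # positive: "pleasant" association
score_b = bias_score("name_b", pleasant, unpleasant)  # negative: "unpleasant" association
```

A positive score means the word's vector leans toward the pleasant set, a negative score toward the unpleasant set; on real embeddings trained from human-written text, such scores have been shown to track human biases.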
