
The Kindness Bias

What should GPT be like?

Matthew Gault, in his Vice article, describes concerns over biases in ChatGPT, along with the confusion over what comes from training bias versus what limits are imposed in the UI.

For textual systems, and perhaps most others, we are going to end up with biases. In the absence of any conscious choice to the contrary, that bias will be toward what is “common,” and that might mean an anti-truth bias in a world where “The Truth Is Paywalled, But The Lies Are Free.”

Much of the freely available text comes from social media and is filled with, if not hate, at least unkindness.

So if we train our AI systems on what is loud and common rather than what is good and true, they will forever infuse their answers with falsehoods and resentment: harmful things we will tacitly accept because they come packaged with desirable functionality.

It’s not too soon to ask our AI systems to be biased toward truth and kindness. Perhaps we could then ask our bloggers, leaders, and media personalities to follow suit.

 


Written by Russell Brand

Russell has started three successful companies, one of which helped agencies of the federal government become very early adopters of open source software, long before that term was coined. His first project saved the American taxpayer 250 million dollars. In his work within federal agencies, he was often called “the arbiter of truth,” facilitating historically hostile groups and factions to work together effectively toward common goals.
