Matthew Gault, in his Vice article, describes concerns over biases in ChatGPT, along with the confusion over which biases come from training data and which are limits imposed at the UI layer.
For textual systems, and perhaps most others, we are going to end up with biases. In the absence of any conscious choice to the contrary, that bias will be toward what is “common,” and that might mean an anti-truth bias in a world where “The Truth Is Paywalled, But The Lies Are Free.”
A lot of the freely available text is social media, filled with, if not hate, at least unkindness.
So if we train our AI systems on what is loud and common rather than what is good and true, they will forever infuse their answers with falsehoods and resentment — harmful things that we will tacitly accept because they come packaged with desirable functionality.
It’s not too soon to ask for our AI systems to be biased toward truth and kindness. Perhaps we could then ask our bloggers, leaders, and media personalities to follow suit.