It’s been an odd day for AI chatbots in the news. Did I say “odd?”
Sorry, I meant terrifying.
First, I heard a Clearer Thinking podcast episode in which Dax Flame had ChatGPT make all the decisions in his life for a year.
Then I read about Pierre committing suicide after spending six weeks talking to the Eliza chatbot (EleutherAI/GPT-J). With Eliza’s guidance, he killed himself as a sacrifice against climate change.
These and other generative AIs aren’t designed to give good answers. (Of course, they aren’t “designed” at all, any more than living things in nature are; both simply evolved. For AIs, we usually say “trained.”)
They are trained on commonly available text. They give the most common answer from their training set. You might call it “the conventional wisdom.” Or perhaps the common thoughts of some particular community.
Those common thoughts are not guaranteed to be sensible. They depend on the time and place of the community. Leeches, human sacrifices, the sun moving around the earth, and hexes causing illness were once the common knowledge and accepted wisdom.
Dax Flame’s personal successes scare me. I fear that hearing of them will give us a lot of Pierres.
Generative AI systems, and large language models in general, are brittle. Even if they give Dax a decade of wonderful advice, there is no guarantee that the next exchange won’t be less than wonderful, perhaps even fatal.