Sharing Information with AIs (www.linkedin.com)

How do you decide what information you're willing to share with which AI?

Almost all of my purposeful use of AI involves only information with no sensitivity.

The whole world can know that I can’t remember the calling convention of some particular programming library and can infer that I’ve switched programming languages and something about what I am working on.

But almost all is not all. Some queries might give more than a hint that I, or someone in my circle, has a particular problem. That is easy enough to partially protect against through false identities with disposable email addresses and browser profiles. Far from complete protection, but probably good enough.

Some might give hints (or more than hints) about an invention or business plan. This is a more serious problem.

I recently made a new CustomGPT that people would likely share confidential business data with. I'd never be able to see that data, which is well and good, but…

There is now an option that didn't exist when I built my last one. It is hidden off-screen at the bottom, under "Additional Settings." Once I expanded it, I found a checkbox

[X] Use conversation data in your GPT to improve our models

that defaulted to YES. It didn't exist before, and I didn't know to look for it. Had I not found it, users who trusted me would have been inadvertently, unknowingly, and, most of all, scarily putting their secrets at risk.

At this point, I am wondering whether I have to run my own language models in order to keep my users safe.


Posted by Russell Brand

Russell has started three successful companies, one of which helped agencies of the federal government become very early adopters of open source software, long before that term was coined. His first project saved the American taxpayer 250 million dollars. In his work within federal agencies, he was often called "the arbiter of truth," facilitating historically hostile groups and factions to work together effectively toward common goals.
