#DontTrainOnMe: Are you Polluting your LLM Brand?

Challenge: What is the common thread between these stories?

2024

People: We can just use the free GPT tools to create and edit our documents.

You: I feel like there is risk here but I can’t put my finger on it yet.

Companies: We need to catch up in the AI game by training on our customer data, let’s just change the terms of service. (Adobe, Meta, Slack, Zoom)

You: Eh, does one more privacy checkbox I don’t read really matter anyway…

2004

People: We can just upload all these party pictures to Facebook for fun.

You: I feel like there is risk here but I can’t put my finger on it yet.

1990

People: The ocean is huge and can just swallow all this plastic.

You: I feel like there is risk here but I can’t put my finger on it yet.
Future you: Turns out maybe it isn’t so great to drink microplastics.

(Special thanks to Eric Antebi from the California Health Care Foundation for this one 😉)

Bigger is Different

The high-level lesson across these stories is that bigger is different, and that data, like plastic, from the past doesn’t just disappear. There is a massive risk in using LLMs that train on your data, and part of that risk is that we don’t fully understand how it compounds as these models get bigger.

According to the World Wildlife Fund, the average person may consume about a credit card’s worth of plastic each week. Turns out a little bit of plastic dumped into the ocean over time adds up and comes back to hurt us. This is a potent metaphor, as LLMs (Large Language Models) are described as being trained on oceans of web and other data. So the thinking goes: a small amount of your sensitive data in the next model is just a drop in that ocean, not a problem.

But what if that drop is actually plastic polluting the person, topic, or brand you work with? Leaders like OpenAI explain that these are probabilistic models that respond to prompts based on their understanding of natural language – they are not just copying from training data. However, it has been shown that these models can be ‘tricked’ into revealing training data, something the New York Times based its lawsuit on (How to trick GPTs into releasing training data).

In 2024, OpenAI responded publicly to the lawsuit by calling the plagiarized content a “rare bug” caused by “intentionally manipulated prompts” to get “our model to regurgitate.” So you’re telling me there’s a chance? 

How much do you want to take that chance with your company’s internal documentation like HR reports, unpublished research, risk audits, and personally identifiable information? The truth is that ‘rare bugs’ are not that rare when there are active groups hunting for these vulnerabilities. For example, here is just one guide on how to trick a GPT Store model into releasing its training data.

Bigger is different. As your company or nonprofit starts its generative AI journey, avoid putting plastic in your LLM ocean. Educate yourself and your team about why using LLMs that train on your data may, over time, hurt your brand or cause.

Yeah, I know I am being a little extra with the alarm here. I am actually a huge user and advocate of leveraging AI for impact. 

The truth is that one HR doc or internal competitive go-to-market analysis associated with your brand is virtually invisible in the context of a trillion-parameter model. 

The truth is that one party photo of you on Facebook from the ’00s (probably) won’t ruin your career.

The truth is that one plastic bottle tossed into the ocean won’t hurt you directly. 

But remember my point – bigger is different. Especially when the rules of the platform aren’t final and the use of these models is still finding its outer edges. It’s much like when early users of Facebook thought it was just a college network and couldn’t imagine their future careers would involve using that same profile to manage millions in client advertising. If you don’t pay for the product, you are the product.

This is why we built CauseWriter.ai for the Whole Whale team: to have access to models that are protected by the API agreements of the best platforms on the market. It may not have the bells and whistles of the latest AI-integrated writers, but it is built to let you benefit from your data rather than handing that benefit to large AI companies.

How can I tell if a model is training on me?

If you aren’t paying for the product, your data is VERY likely being trained on. And even if you are paying for the product, your data may still be used for training, as in the case of OpenAI…

Since writing this, we have gotten a lot of questions about how to identify this. The answer is that it’s tricky, because so many models and interfaces obscure the disclosure. So here are some examples of what it looks like when an LLM provider discloses that it is training on your content.

OpenAI example

OpenAI’s plan comparison shows the options but omits a key detail: even on the $20/mo Plus level, there is no statement that your data won’t be used for training. Only by following the enterprise privacy policy can you see the explicit statements about training for the different service levels.

Here is an example of an option GPT Store creators could select to decide whether conversation data would be used to improve models. This option has since been removed, so assume the default is that all Store GPTs will use your data for training unless otherwise noted.

Perplexity AI

Perplexity.ai is an AI-first search engine that does a great job of citing its sources. Search chat content is used to train models unless users opt out. Here is what that looks like:

Anthropic.com

According to an updated commercial TOS, Anthropic’s Claude models don’t use customer data to train new versions of their LLM, though they do keep and review logs for compliance. Here is what this looks like in their statement: “Anthropic may not train models on Customer Content from paid Services” – the key word here is PAID.

Google Gemini Pro

Google Gemini, which also powers Bard search, is tough to figure out from an LLM training perspective. Google collects and uses data for safety reasons and has always saved and used search data for ads. For developers using the Gemini API, it looks like there are full data controls and information won’t be used for training – for now…

OpenRouter AI

This is an awesome clearinghouse for connecting with leading LLM APIs. While it is mainly used by developers, it is interesting to see some of the decisions that can be made that affect the privacy of requests routed through these APIs. Here is an example of a data privacy note in a model description:

Another available setting offers a 1% discount if developers allow their data to be used for categorization and training.
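
To make this concrete, here is a minimal sketch of what opting out of data collection can look like at the API level. It assumes the Python requests library, an OPENROUTER_API_KEY environment variable, and the provider.data_collection routing preference described in OpenRouter’s documentation at the time of writing; treat the exact field names and the model slug as illustrative, not authoritative.

```python
# Minimal sketch: asking OpenRouter to avoid providers that may store or
# train on prompts. Field names follow OpenRouter's provider-routing docs
# at the time of writing and may change; verify against the current docs.
import os

import requests

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "anthropic/claude-3-haiku",  # example model slug
        "messages": [
            {"role": "user", "content": "Summarize this internal memo: ..."}
        ],
        # Routing preference: skip providers whose policy lets them
        # collect and train on your prompts (assumed field name).
        "provider": {"data_collection": "deny"},
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Whichever router or API you use, the pattern is the same: the opt-out is a setting you have to find and flip, not the default.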