Zero-Retention Policy

Under no circumstances will your data be used to train AI models: not by us, and not by our partners.

What does "Zero-Retention" mean?

"Zero-Retention" is our ironclad promise to you: Your content will never be used to train AI models.

This principle is a core component of our privacy philosophy and applies to all your interactions on the innoGPT platform.

Our zero-retention guarantee in detail:

  1. No training with your inputs: Every question, every command, and every piece of uploaded content you enter into the platform (your "prompts") is used exclusively to process your current request. Afterward, this data is not stored or analyzed for training purposes.

  2. No training with AI responses: The responses and results generated by the AI are also not used for training or improving the models.

  3. Contractually guaranteed: We not only promise this to you, but have also firmly enshrined it in the contracts with our technology partners (the providers of the LLMs). The providers are obligated not to use your data for training their models.

  4. Your data, your control: You retain full sovereignty and sole ownership of your content. What happens in your workspace remains your intellectual property.

Terms of Use of LLM Providers

Transparency is extremely important to us. Although our zero-retention policy applies to all models we integrate, each provider has its own specific terms of use. For a complete overview, the sections below link to the respective terms of the models we use.

OpenAI Models (Microsoft Azure)

innoGPT has entered into a Customer Agreement with Microsoft governing the use of the Azure services. innoGPT uses the Azure services on the basis of this Customer Agreement and the Azure Terms of Service, which include Microsoft’s data protection and information security obligations. innoGPT has also entered into a Data Processing Agreement with Microsoft that governs data processing by Microsoft. In this agreement, Microsoft has committed, among other things, not to disclose the data or make it accessible to others.

Specifically, this means that all prompts, outputs, embeddings, and proprietary training data will not be (i) made available to other users, (ii) shared with OpenAI or other model developers, (iii) used to train the models, or (iv) used to improve other Microsoft services.

In the privacy and security provisions for Azure Core Services, Microsoft guarantees the following: “If the customer configures a specific service to be deployed in a data center within a major region (each referred to as a ‘Geo’), Microsoft stores the customer data at rest within that specific Geo.” In our case, this is Frankfurt.

For services that Microsoft provides for specific geographies, Microsoft does not store or process customer data outside the geography specified by innoGPT (EU). However, individual LLMs that innoGPT customers must actively select are offered by Microsoft only “globally”; for these LLMs, Microsoft may process prompts and completions in any Azure OpenAI region worldwide. In such cases, transfers of data abroad are safeguarded under EU data protection law: we have agreed with Microsoft on the applicability of the EU Standard Contractual Clauses, and Microsoft is also certified under the EU-US Data Privacy Framework. Data at rest remains stored in the geography specified by innoGPT (EU) in these cases as well.

Anthropic Models (Amazon Bedrock)

innoGPT has entered into a customer agreement with AWS and uses the AWS Bedrock services on the basis of the AWS Service Terms. In these Terms, AWS has committed not to use content processed via its AI services to train models or to improve other AWS services. That Amazon does not use the data for training purposes is stipulated in the AWS Service Terms and the DPA, and is also summarized in the Amazon Bedrock User Guide: “Amazon Bedrock doesn’t store or log your prompts and completions. Amazon Bedrock does not use your prompts and completions to train any AWS models and does not distribute them to third parties.” innoGPT has also entered into a Data Processing Agreement with AWS that governs data processing by AWS. In it, AWS has committed, among other things, to treat the data confidentially, not to disclose it to third parties, and to process it only within the European Union.

Google Models (Google Cloud Platform)

innoGPT has entered into a customer agreement with Google and uses Google Cloud Platform services on the basis of the Google Cloud Platform Terms of Service, including the Service Specific Terms. In these Terms, Google has committed not to use customer data to train or improve AI/ML models without the customer’s prior consent. Accordingly, Google also states in its Google Cloud guide to generative AI products and in its declaration on data protection obligations for cloud-based AI products that Google Cloud does not use customer data to train its foundation models by default: “Customers can use Google Cloud’s Foundation Models because they can be confident that their prompts, responses, and all training data for adapter models will not be used to train Foundation Models.” In addition, innoGPT has entered into a Data Processing Agreement with Google Cloud that governs data processing by Google Cloud. In the Service Terms, Google Cloud also guarantees that if the customer selects a specific region or multi-region as the data location, Google will store the customer’s data only in that selected region or multi-region.

Mistral Models (Mistral AI)

innoGPT uses Mistral AI’s models via a direct API connection. According to the Privacy policy of Mistral AI, inputs and outputs processed via the paid API services (“paid version of our APIs”) are not used for training the AI models. Mistral AI stores this data for a period of 30 days for abuse monitoring before it is deleted. As a European company based in France, Mistral AI is directly subject to the strict requirements of the GDPR and processes the data on servers within the European Union. A Data Processing Addendum (DPA) has been concluded with Mistral AI to regulate data processing.

OpenAI Image Models (OpenAI)

innoGPT has entered into a contract with OpenAI for the direct use of its API services. In accordance with OpenAI’s published data protection principles for business customers and its API usage policies, OpenAI explicitly commits not to use customer data transmitted via the API to train or improve its models. This commitment applies by default, and innoGPT has not agreed to any deviating use. To detect and prevent misuse, data may be retained for a maximum of 30 days and is then deleted. For data processing, innoGPT has entered into a Data Processing Addendum (DPA) with OpenAI. Data transfers to the U.S. are safeguarded by the EU Standard Contractual Clauses (SCCs), which form part of this DPA.

Perplexity Models (Perplexity AI)

innoGPT has entered into an agreement with Perplexity AI for the direct use of its API services. In accordance with the API terms of use of Perplexity, the company explicitly undertakes not to use customer data (“Customer Content”) to “train, retrain, refine, or otherwise improve generative AI models.” A Data Processing Addendum (DPA) has been signed with Perplexity AI for data processing. Since Perplexity AI is a U.S. company, data transfers are safeguarded by the EU Standard Contractual Clauses (SCCs) included in the DPA, as well as Perplexity AI’s certification under the EU-U.S. Data Privacy Framework, to ensure adequate protection of personal data.

Image Models (Replicate)

innoGPT integrates image generation models via the Replicate platform and has entered into an agreement for this purpose. Replicate’s privacy policy explicitly guarantees that the inputs and outputs users submit to the service (“Service Data”) will not be used to train Replicate’s own models. This also applies to the open-source models hosted on Replicate. It is important to note that predictions created via the API, including all inputs and outputs, are automatically deleted from the server after one hour. Data processing takes place in the United States. To ensure an adequate level of data protection for these transfers, a Data Processing Addendum (DPA) has been concluded with Replicate, which incorporates the EU Standard Contractual Clauses (SCCs) as its legal basis.