The data flow explained

We show you, step by step and with full transparency, what happens to your data, from the moment you enter it to the AI's final response.

Introduction: Transparency as the Foundation

To fully demystify the "AI black box," we’re laying out our entire process chain in this article. This gives you full control and the confidence that your data is processed securely and exclusively according to your wishes.

Overview: The Journey of Your Request

In simple terms, here’s what happens: Your input is securely encrypted and sent to our platform in Frankfurt. From there, it is processed and forwarded to the AI model you selected. The AI’s response is sent back to our platform and delivered to you securely encrypted.

The data flow in detail: A journey in 5 steps

1. Your secure input (prompt)
You enter a message or upload a file. From the very first character, all communication between your browser and us is protected by strong TLS (Transport Layer Security) encryption. Your request lands on our application server in Frankfurt am Main.
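To give a feel for what "strong TLS" means on the client side, here is a generic Python sketch (not our actual client code) of how a connection with modern, certificate-verified TLS settings is configured:

```python
import ssl

# Generic illustration: a TLS context that refuses anything older than TLS 1.2
# and requires a valid server certificate, as modern browsers do by default.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# create_default_context() already enables certificate verification,
# so an unverified or expired certificate would abort the connection.
```

Your browser applies equivalent settings automatically; no configuration is needed on your side.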

2. Context & Enrichment on Our Platform
Our software in Frankfurt prepares the request for the AI to provide it with the best possible context for a precise response. To do this, multiple information sources are intelligently combined:

  • Chat history: To ensure the AI doesn’t lose track of the conversation and can refer to previous messages, the entire chat history is securely linked to your new request.

  • Reminders: If your request requires content from your "Reminders," this relevant information is also automatically added to enable the AI to gain a deeper, personalized understanding.
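As a simplified sketch of this enrichment step (all names here are illustrative, not our actual schema), the sources above are combined into a single ordered message list:

```python
# Illustrative sketch of step 2. Function and field names are hypothetical,
# not our real implementation.

def build_payload(prompt, history, memories):
    """Combine the new prompt with chat history and relevant stored context."""
    messages = []
    # Relevant stored information is injected as system context first.
    if memories:
        context = "Relevant stored information:\n" + "\n".join(memories)
        messages.append({"role": "system", "content": context})
    # The full chat history keeps the conversation coherent.
    messages.extend(history)
    # Finally, the user's new input.
    messages.append({"role": "user", "content": prompt})
    return messages

payload = build_payload(
    prompt="Summarize our last discussion.",
    history=[{"role": "user", "content": "Hello"},
             {"role": "assistant", "content": "Hi, how can I help?"}],
    memories=["User prefers short answers."],
)
```

The ordering matters: stored context first, then history, then the new prompt, so the model reads the conversation the same way you would.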

3. The Request to the AI Model (LLM)
Our platform sends the prepared data package (your prompt + chat history + context) to the AI model you selected via a secure API. This is where your admin settings for model hosting (EU/EU, EU/Global, Global/Global) and our contractually guaranteed zero-retention policy come into play: the model processes the request and immediately "forgets" it afterward, without using it for future training.
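The shape of such a request might look like the sketch below. The endpoint URL, the region placeholder, and the "store" flag are purely illustrative stand-ins, not a real provider API; the request is only assembled here, not sent:

```python
# Illustrative only: URL, header names, and the "store" flag are hypothetical
# stand-ins for a provider's zero-retention option, not a real API.

def make_request(messages, model_region):
    """Assemble a zero-retention inference request (not actually sent here)."""
    return {
        "url": f"https://inference.{model_region}.example/v1/chat",  # hypothetical
        "headers": {"Authorization": "Bearer <api-key>"},
        "json": {
            "model": "selected-model",
            "messages": messages,
            "store": False,  # zero retention: the provider may not keep the data
        },
    }

request = make_request(
    messages=[{"role": "user", "content": "Hello"}],
    model_region="eu",
)
```

In practice, the hosting choice (EU/EU, EU/Global, Global/Global) determines which endpoint the request is routed to, while the zero-retention guarantee is fixed contractually rather than by a flag alone.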

4. The Final Response & Secure Storage
The AI’s response is sent back to our platform in Frankfurt. We store the conversation (your prompts and the AI’s responses) in our database so you can access your old chats at any time.

  • Protection: Your data is protected multiple times over by AES-256 encryption (a military-grade standard) and Row-Level Security (RLS).

  • Deletion: Chats that have been inactive for 180 days are automatically and irrevocably deleted.
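The 180-day rule is simple to express in code. A minimal sketch of the cutoff logic (the function name is illustrative, not our actual scheduler):

```python
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 180  # chats inactive longer than this are purged

def is_expired(last_activity, now=None):
    """True if a chat's last activity lies more than 180 days in the past."""
    now = now or datetime.now(timezone.utc)
    return now - last_activity > timedelta(days=RETENTION_DAYS)

# Example: relative to 1 July 2025, a chat last active on 1 December 2024
# (about 212 days earlier) is expired; one from 1 May 2025 is not.
reference = datetime(2025, 7, 1, tzinfo=timezone.utc)
old_chat = datetime(2024, 12, 1, tzinfo=timezone.utc)
recent_chat = datetime(2025, 5, 1, tzinfo=timezone.utc)
```

"Inactive" means no new message in the conversation; any activity resets the clock.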

5. Delivery to You
The finished response is sent from our servers to your browser via TLS encryption and displayed there for you. The cycle is complete.

Special Cases: When Tools & Files Come into Play

Intelligent tool usage
The AI model independently recognizes when it needs additional information for a response (e.g., current news via web search) or must perform actions. It requests this from our platform. Our platform executes the tool securely and in isolation and returns only the raw result to the AI. This allows the AI to formulate a well-founded and up-to-date response without having direct access to external systems itself.
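In pseudocode-like Python (the tool registry, the loop structure, and the stub model are all illustrative), the tool cycle described above looks roughly like this. The key point the sketch shows: tools run on our platform, and the model only ever sees the raw result.

```python
# Illustrative sketch of the tool cycle. The registry and the model stub are
# hypothetical; real tool calls run sandboxed on our platform.

def web_search(query):
    """Stand-in for a tool executed securely and in isolation, server-side."""
    return f"Top results for '{query}' (fetched server-side)"

TOOLS = {"web_search": web_search}

def run_with_tools(model_step, prompt):
    """Loop: call the model; if it requests a tool, run it and feed the result back."""
    result = model_step(prompt, tool_result=None)
    while result.get("tool_call"):
        name, args = result["tool_call"]
        tool_output = TOOLS[name](*args)  # executed on our platform, not by the model
        result = model_step(prompt, tool_result=tool_output)
    return result["answer"]

# A minimal stub model: first it asks for a search, then answers with the result.
def stub_model(prompt, tool_result):
    if tool_result is None:
        return {"tool_call": ("web_search", (prompt,))}
    return {"answer": f"Based on: {tool_result}", "tool_call": None}

answer = run_with_tools(stub_model, "current news")
```

Because the loop mediates every call, the model never holds credentials or opens network connections itself; it only proposes a tool and receives its output.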

Working with your files (RAG)
We use a hybrid strategy (Retrieval-Augmented Generation) to offer you the best possible experience when working with documents:

  1. Our own RAG system: When you upload a file, it is processed on our servers in Frankfurt and stored in a secure vector database. The benefit for you: The file’s content remains available in the chat and can be used by the AI, even if you switch AI models mid-conversation.

  2. The AI model's native RAG: If the selected model offers its own file-processing capability, we use it in parallel so your query is answered as efficiently as possible. Our zero-retention policy applies here as well, of course.
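The retrieval half of our own RAG system can be sketched in miniature. Real systems use learned embeddings and a dedicated vector database; in this toy example, tiny hand-made vectors stand in for both, but the mechanism (rank chunks by similarity to the query, hand the best ones to the AI as context) is the same:

```python
import math

# Toy sketch of RAG retrieval: hand-made 3-dimensional "embeddings" stand in
# for a real embedding model and vector database.

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# "Vector database": document chunks stored with their embeddings.
chunks = [
    ("Invoices are due within 30 days.", [0.9, 0.1, 0.0]),
    ("The office is closed on public holidays.", [0.1, 0.9, 0.0]),
]

def retrieve(query_vector, top_k=1):
    """Return the top_k chunk texts most similar to the query embedding."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vector, c[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]

# A query about payment terms maps (in this toy space) close to the first chunk.
context = retrieve([0.8, 0.2, 0.0])
```

Because the chunks live in our own vector database in Frankfurt, this retrieved context travels with the conversation even when you switch AI models mid-chat.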