Artificial intelligence promises to radically simplify software developers' everyday work: from automated code suggestions to complex solution proposals in a matter of seconds. But the more tempting the possibilities, the greater the risks: How do you protect sensitive corporate or customer data when using AI tools? And how do you prevent the use of these tools from quickly becoming expensive?
Language models and their availability from European providers
In conversations with friends and family, it quickly becomes clear: when people talk about AI, they talk about OpenAI’s ChatGPT. Even those who look into the topic in more detail quickly realize that AI models come mainly from the USA and China. But is that true? Which European alternatives exist, and how can the foreign models be used – at least to some extent – in compliance with data protection regulations?
When talking about large language models – LLMs for short – you have to distinguish between proprietary offerings such as OpenAI’s GPT-5, Anthropic’s Claude or Google’s Gemini series and open-source models such as OpenAI’s GPT-OSS, Meta’s Llama or DeepSeek-AI’s DeepSeek. But how open are these models really? When LLM providers talk about open source, they usually mean open weights. Only in the rarest cases are the models truly open from start (training data, source code) to finish (weights). Meta added another dimension to this picture in response to European regulation: geographical restriction. Llama 4, the American company’s current language model, cannot sensibly be used in Europe.
So what alternatives remain? On the commercial side, there is really only one European startup that can keep up with the big providers in the field of language models: Mistral AI, known to many through its chat assistant Le Chat. A look at the LLM benchmark provided by LMArena shows that the Codestral series can compete with the international competition. But how can you tap into this innovative power? One option is to host the models yourself in your own "data center". Unlike training, which consumes gigantic amounts of resources, inference – simply answering a user request – is significantly less computationally intensive. The table below (*) shows, however, that it is still not trivial: it either requires a certain investment in hardware, or you have to compromise on quality (i.e. a smaller model size).
| Model size | VRAM (inference) | System RAM |
|---|---|---|
| 1 B | ≈ 2 GB VRAM (≈ 2 GB per 1 B parameters) | 16 GB DDR5 |
| 7 B | ≈ 14 GB VRAM (2 GB × 7) | 32 GB DDR5 |
| 70 B | ≈ 48 GB VRAM (single 48 GB GPU) or 2 × 24 GB with off-loading | 128 GB DDR5, or 64 GB with strong quantization |
| 120 B | 48 GB VRAM, alternatively 2 × 24 GB GPUs with off-loading | 192 GB DDR5 (recommended), 256 GB for comfortable headroom |
*The table was generated by GPT-OSS-20B and serves solely to illustrate how hardware requirements grow with the number of parameters.
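The table's rule of thumb can be sketched as a short back-of-the-envelope calculation: at 16-bit precision, each parameter occupies about 2 bytes of VRAM, i.e. roughly 2 GB per billion parameters, and quantization shrinks that proportionally. This is an illustrative estimate only (real usage also depends on context length, KV cache and framework overhead), not a sizing guide:

```python
def estimate_vram_gb(params_billion: float, bits_per_param: int = 16) -> float:
    """Rough VRAM estimate for holding model weights, in GB.

    Rule of thumb: 16-bit weights need ~2 bytes per parameter,
    so ~2 GB per billion parameters; 4-bit quantization needs a quarter.
    """
    bytes_per_param = bits_per_param / 8
    return params_billion * bytes_per_param

# Reproduce the orders of magnitude from the table above:
for size in (1, 7, 70, 120):
    fp16 = estimate_vram_gb(size)
    q4 = estimate_vram_gb(size, bits_per_param=4)
    print(f"{size:>4} B params: ~{fp16:.0f} GB at FP16, ~{q4:.0f} GB at 4-bit")
```

The 4-bit column illustrates why strong quantization is the usual compromise: a 70 B model drops from roughly 140 GB of raw FP16 weights to a size that fits on one or two consumer GPUs.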
This is where hosting providers like STACKIT, the Open Telekom Cloud or IONOS come into play. Similar to AWS, Azure or GCP, customers can access hosted versions of the language models. IONOS is currently particularly attractive: the various models can be used free of charge until 30 September 2025.
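Many of these hosting providers expose an OpenAI-compatible chat-completions API, which makes switching between them straightforward. The following is a minimal sketch of how such a request could be built; the base URL, token and model name are placeholders, not verified values — consult your provider's documentation for the real endpoints:

```python
import json
import urllib.request

API_BASE = "https://example-provider.eu/v1"  # placeholder, see provider docs
API_TOKEN = "YOUR_API_TOKEN"                 # placeholder

def build_chat_request(prompt: str, model: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completion request without sending it."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{API_BASE}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request(
    "Explain quantization in one sentence.",
    model="some-hosted-model",  # placeholder model identifier
)
# With real credentials, the request could be sent like this:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the payload format is the same everywhere, only the base URL, token and model identifier change when moving between providers — a useful property when a free tier expires.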
Integration into your own development environment
One of the most widespread integrations of an AI assistant into one’s own development environment is probably GitHub’s – no, Microsoft’s – Copilot in VSCode. Google’s Gemini Code Assist can also be integrated easily. If you are thinking about using OpenAI’s API in VSCode, you have to look a little harder: although there are extensions for this too, it is not straightforward. This is where the Eclipse Foundation comes in. While I used the classic Eclipse for Java development during my studies, there are now a number of products that integrate parts of VSCode. One of them is Eclipse Theia, a platform for building web-based development environments. In March 2025, Theia AI was published – a framework that enables deep integration of artificial intelligence into development environments. The exciting part for this blog series: the tool supports a wide range of AI providers.
Your own European AI development environment
While Theia is already used by a number of partners, the interesting question for a cost-conscious, self-employed developer is of course: how can the current IONOS offer be combined with Eclipse’s Theia AI? That is what the next part of this series will be about.
See you soon!
