AI in the IDE: First impression of the IONOS Model Hub in Eclipse Theia

In recent posts, we’ve looked at how AI is changing our software developers’ everyday lives – from automated code suggestions to new opportunities through models hosted in Europe. We talked about language models and their availability in German and European environments, as well as IDEs and tools that integrate AI support.

This is where we start today: What does a development environment look like in which AI is not just additive, but is an integral part – all while paying attention to European cloud solutions and data protection? The focus is on the AI offering of IONOS, the Model Hub, in conjunction with Eclipse Theia – a combination that promises to deliver the best of both worlds: AI support directly in the IDE and hosting that meets the requirements of European data protection standards.

Install Eclipse Theia

The Theia IDE can be found at theia-ide.org. Alternatively, you can try the online version at https://try.theia-cloud.io/; however, this requires an account login. In line with the topic of data protection, the website states:

We only need your login data for preventing misuse. We never contact you via your e-mail and will never use it for any other marketing purposes.

With a view to the upcoming posts, however, I recommend installing the desktop version. Since I have been a satisfied Linux user for years, I chose the Snap variant; builds for Windows and macOS are also available.

Visually, the Theia IDE can hardly be distinguished from Microsoft’s Visual Studio Code.

Screenshot of the Eclipse Theia IDE

This is not surprising, as Eclipse Theia and Visual Studio Code are technically very similar: both use the Monaco editor, the Language Server Protocol and the Debug Adapter Protocol. While VS Code is developed by Microsoft and scores with its huge extension ecosystem, polished features and broad community, it is partly proprietary and tied to telemetry. Theia, on the other hand, is fully open source under the Eclipse Foundation, modular and particularly customizable. It supports many VS Code extensions via the Open VSX registry, but also offers its own extension mechanisms and more control over data protection and infrastructure.

A first impression of the AI Chat View

After the first start, no AI functions are activated by default. They can be enabled via the notification on the home screen, via the settings editor, or directly from the AI Chat view:

Position of AI Chat View in Eclipse Theia

As soon as the features are activated, the user can ask the IDE questions. The reference to the AI agents is interesting: by default, Theia ships with eight of them. The Coder supports programming directly in the workspace, the Universal Agent answers general questions, and the Orchestrator automatically selects the appropriate helper. The Command Agent helps with Theia commands, while the Architect advises on project structure and planning. In addition, there is a Code Completion agent for inline suggestions, a Terminal Assistance agent for commands in the terminal and the App Tester for end-to-end testing of web applications. Unfortunately, you cannot start asking questions straight away.

Before use, Theia must be configured for the desired LLM provider. The IDE currently supports various providers at different levels of maturity. Marked as "public" are OpenAI, OpenAI-API-compatible providers, Azure and Mistral. Anthropic support is in the beta phase, Ollama support in the alpha phase. Vercel AI, Google AI, LlamaFile and Hugging Face are marked as experimental.

IONOS GPT and the registration for the Datacenter Designer

The integration of providers with an OpenAI-compatible API makes it possible to use the IONOS offering. The AI Model Hub is a fully managed platform that allows organizations to access leading open-source AI models without having to worry about infrastructure or hosting. The models are hosted in German data centers, which ensures maximum data security and GDPR compliance. I have chosen the platform for this article series because registration is possible without a sales agent, and the offer can be used free of charge until the end of September. By default, the platform provides a graphical interface that closely resembles the Open WebUI project. Similar to Theia itself, IONOS offers the user eight different models to choose from.

The IONOS GPT web interface and the eight offered models

Mildly annoying: if you register for IONOS GPT, you are apparently also signed up for the newsletter. From July to September, I received 31 emails. That reminds me of Marcell Davis; it seems the Montabaur-based company cannot completely separate itself from the former face of its advertising. So cancel the newsletter right away.

To use the API of the IONOS Model Hub, however, a complete registration is necessary. Here, one encounters one of my personal pet peeves when registering for online services: the password length is limited. Why I can't use an arbitrarily long password in the age of password managers is absolutely beyond me.

After registration, the full version of the "Data Center Designer" (DCD for short) has to be activated, and a user as well as a suitable key have to be created. For the full version, the customer unfortunately has to register with their full address and a credit card. The activation is then checked within 24 hours. Unfortunately, I did not receive an e-mail once the check was successful. Instead of a new user, a service account would IMHO be desirable at this point, because the second user – who is really only a "technical" user – also requires an e-mail address and a password. Using the same e-mail address as for the admin account leads to an error.

Since, unlike with the assistants from Google and Microsoft, we cannot log in directly from the editor via a login flow, I choose the longest available token lifetime – 365 days.
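Tokens with such a long lifetime tend to be forgotten. The token issued by the DCD looks like a standard JWT, so – assuming it carries the usual `exp` claim – its expiry date can be read with a few lines of Python. This is only a sketch, not an official IONOS tool, and it deliberately skips signature verification:

```python
import base64
import json
from datetime import datetime, timezone


def token_expiry(jwt: str) -> datetime:
    """Return the 'exp' claim of a JWT as a UTC datetime.

    Assumes a standard three-part JWT with an 'exp' claim. The signature
    is NOT verified - this is only for reading the expiry date.
    """
    payload_b64 = jwt.split(".")[1]
    # Restore the padding that base64url encoding strips off
    payload_b64 += "=" * (-len(payload_b64) % 4)
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    return datetime.fromtimestamp(payload["exp"], tz=timezone.utc)
```

Called with the token from the DCD, this returns the UTC expiry date, so a calendar reminder can be set before the 365 days run out.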

Connecting IONOS AI hub with Theia

Having generated the token, we can now connect Theia with IONOS’ AI offering. The AI Model Hub currently offers access to a total of 13 models, of which only one is a dedicated coding model ("Code Llama 13B"). In addition, seven models are available for text generation, two for image generation and three for embeddings. The complete overview can be found in the provider’s documentation.

In order to give Theia access to the models, the settings must be adjusted. I have not found a way to do this via the "graphical" settings interface. Instead, I had to resort to the JSON variant, which can be opened in the settings by clicking on the two curly brackets in the upper right corner. The IONOS documentation helps to determine the right properties. Accordingly, the following values have to be added:

{
   "ai-features.openAiCustom.customOpenAiModels": [
       {
           "model": "meta-llama/CodeLlama-13b-Instruct-hf",
           "url": "https://openai.inference.de-txl.ionos.com/v1",
           "id": "IONOS Code Llama 13b",
           "apiKey": "eyJ0eXAiOi...Rest...Des...Tokens...Des...KI-Nutzers...aus...dem...DCD...",
           "developerMessageSettings": "developer"
       }
   ]
}

In the next step, we have to specify which model the Theia agents should use by default. This can be configured either via the Settings UI or in the JSON view. It is important here that we do not name the models after their technical name, but the "id" we used above.

"ai-features.languageModelAliases": {
    "default/code": {
        "selectedModel": "IONOS Code Llama 13b"
    },
    "default/code-completion": {
        "selectedModel": "IONOS Code Llama 13b"
    },
    "default/summarize": {
        "selectedModel": "IONOS Code Llama 13b"
    },
    "default/universal": {
        "selectedModel": "IONOS Code Llama 13b"
    }
}

Theoretically, this enables us to use different models from the model hub for different tasks. For now, we rely on "Code Llama". At least that's how it was meant to be. Unfortunately, the first request in the AI chat failed with the following error message:

400 litellm.BadRequestError: OpenAIException - Error code: 400 - {'object': 'error', 'message': '', 'type': 'BadRequestError', 'param': None, 'code': 400} Received Model Group=meta-llama/CodeLlama-13b-Instruct-hf Available Model Group Fallbacks=None

O...kay. If you change the selected model to OpenAI's 120B OSS model ("openai/gpt-oss-120b"), it works. An attempt with the sovereign German model "Teuken 7B" also failed, as did attempts with MistralAI’s "Nemo 12B" and "Mistral Small 24B". In the web UI of the IONOS Model Hub, however, all models can be used. The suspicion that the models might be incorrectly named in the documentation was dispelled by querying the API:

curl https://openai.inference.de-txl.ionos.com/v1/models -H "Authorization: Bearer ey..."
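Since the endpoint follows the OpenAI API, the reply to `/v1/models` should be a list object whose entries carry the model IDs. Extracting them can be sketched in a few lines of Python; note that the sample reply below only illustrates the OpenAI-compatible shape and is not captured IONOS output:

```python
import json


def model_ids(models_response: dict) -> list[str]:
    """Extract the model identifiers from an OpenAI-compatible /v1/models reply."""
    return [entry["id"] for entry in models_response.get("data", [])]


# Illustrative shape of an OpenAI-compatible reply (not actual IONOS output)
sample = json.loads("""
{
  "object": "list",
  "data": [
    {"id": "meta-llama/CodeLlama-13b-Instruct-hf", "object": "model"},
    {"id": "openai/gpt-oss-120b", "object": "model"}
  ]
}
""")
print(model_ids(sample))
```

Fed with the real curl output, this quickly confirms whether the identifier from the documentation matches what the API actually serves.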

Calling the text-generation endpoint directly was also successful:

curl https://openai.inference.de-txl.ionos.com/v1/chat/completions \
  -H "Authorization: Bearer ey..." \
  -H "Content-Type: application/json" \
  -d '{
    "model": "meta-llama/CodeLlama-13b-Instruct-hf",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Write a Python function that sorts a list."}
    ]
  }'
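For repeated experiments, the same request can be scripted with Python's standard library. The payload builder below mirrors the curl call above; only the send step at the bottom needs a real token (assumed here to live in a hypothetical `IONOS_TOKEN` environment variable):

```python
import json
import os
import urllib.request

API_URL = "https://openai.inference.de-txl.ionos.com/v1/chat/completions"


def build_payload(model: str, user_prompt: str) -> dict:
    """Build an OpenAI-compatible chat-completion payload, mirroring the curl call."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": user_prompt},
        ],
    }


def send(payload: dict, token: str) -> dict:
    """POST the payload to the IONOS endpoint and return the parsed JSON reply."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


if __name__ == "__main__" and "IONOS_TOKEN" in os.environ:
    payload = build_payload(
        "meta-llama/CodeLlama-13b-Instruct-hf",
        "Write a Python function that sorts a list.",
    )
    print(send(payload, os.environ["IONOS_TOKEN"]))
```

Swapping the model string makes it easy to probe each of the 13 models in turn and see which ones the API actually accepts.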

Again, it helps that Theia is open source. There is no need for an expensive support request to a service provider; instead, I have opened a discussion on GitHub. Since, unlike with a commercial provider, I do not expect a prompt answer, I will use the OpenAI model for my further tests for now. More on that in the next part of this series.