As early as 2017, Andrej Karpathy – then Director of AI at Tesla – tweeted: “Gradient descent [i.e. a mathematical method for training neural networks] can write better code than you. I’m sorry.” That same year, he coined the term “Software 2.0”, describing a shift toward applications built on neural networks rather than traditional, hand-written code.
Eight years later, Artificial Intelligence (AI) has become an omnipresent force in our daily lives. During a typical morning commute, one might follow a route optimised by Google Maps and narrated by Siri while passing digital billboards displaying advertisements unmistakably generated by AI. Meanwhile, driver assistance features such as Lane Assist subtly intervene to help maintain focus on the road. At work, tools like GitHub Copilot, Theia Coder, and ChatGPT are increasingly regarded as dependable digital collaborators, enhancing productivity across various tasks. By evening, social media platforms present a curated gallery of AI-generated content, where individuals transform themselves into anime characters or hyper-realistic action figures. In other words, there is no denying the profound impact that AI, and especially Generative AI, is having on our lives, be it for playful or useful purposes; and as technology innovators, we feel constant pressure to harness its potential in ways that are not only meaningful but also commercially viable.
The good news is that pressure is what creates diamonds – and despite all the hype and buzzword bingo, there are quite a few developer teams who have succeeded in developing highly useful and business-friendly technologies by jumping on the AI bandwagon. With our project Eclipse LMOS (short for: Language Model Operating System), we strive to achieve exactly that – be it for customer-facing telecommunications services or other areas such as automotive – while adhering to European data privacy standards.
It all began modestly: In 2023, a team of five engineers at Deutsche Telekom set out to explore how LLMs could be applied to customer sales and service, specifically in Telekom’s digital assistant “Frag Magenta” (Ask Magenta). The task was to roll out GenAI across Telekom’s European operations in ten countries, each with different business processes, APIs, specifications, and languages.
Fast forward two years, and what started as a weekend pilot by a handful of dedicated team members has evolved into a fully-fledged AI project, hosted by the Eclipse Foundation, now powering real-world applications. What is more, it’s designed to comply with European data privacy standards and thus could contribute significantly to European tech sovereignty.
A grassroots initiative within a large enterprise, driven by a true startup spirit and a passion for innovation? Absolutely. These have been the core forces propelling this project forward. Let’s take a closer look at the evolution of the LMOS platform and its most notable features.
From RAG and LangChain to JVM and Kotlin
After initially looking into RAG-based systems (Retrieval-Augmented Generation) and the LangChain paradigm in 2023 in search of a modular and scalable approach to leveraging Large Language Models (LLMs), it soon became clear that our team would need to think outside the box and build something completely new from the ground up, “the next Heroku for agents”, as we phrased it in another article. To ensure the project would not merely serve as a sandbox for AI technologies, but also be deployable, scalable, and sustainable over the long term, we needed to come up with the most effective architectural paradigm to support it.
Given the investments Deutsche Telekom had already made in its JVM-based infrastructure stack, including microservices and transactional API patterns, we started building a new agent framework in Kotlin in October 2023, replacing the original LangChain version within a few weeks. Beyond our existing commitment to the JVM ecosystem, Kotlin offered specific advantages that aligned with our needs: we recognised the importance of democratising our technology stack, which could be achieved through Domain-Specific Languages (DSLs), and the nature of our application demanded advanced concurrency capabilities – both challenges that Kotlin addresses natively.
The Kotlin-based DSL we refer to as ARC enables developers to create AI agents simply by describing them in natural language. Thanks to full access to the Java ecosystem and Kotlin’s powerful features, transitioning from a basic proof-of-concept to a scalable system is both seamless and efficient.
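To make this concrete, here is a minimal, self-contained sketch of what such an agent-definition DSL can look like in Kotlin, using type-safe builders. All names below (`AgentDefinition`, `AgentBuilder`, `agent`, `prompt`, `tools`) are illustrative stand-ins, not the actual ARC API:

```kotlin
// Minimal sketch of an agent-definition DSL in the spirit of ARC.
// The types and builder names are illustrative assumptions, not the real ARC API.

data class AgentDefinition(
    val name: String,
    val description: String,
    val systemPrompt: String,
    val tools: List<String>,
)

class AgentBuilder {
    var name: String = ""
    var description: String = ""
    private var systemPrompt: String = ""
    private val toolNames = mutableListOf<String>()

    // The agent's behaviour is described in plain natural language.
    fun prompt(text: () -> String) {
        systemPrompt = text()
    }

    fun tools(vararg names: String) {
        toolNames += names
    }

    fun build() = AgentDefinition(name, description, systemPrompt, toolNames.toList())
}

// Entry point of the DSL: configure a builder and produce an immutable definition.
fun agent(init: AgentBuilder.() -> Unit): AgentDefinition =
    AgentBuilder().apply(init).build()

// Usage: a hypothetical billing agent for a telco scenario.
val billingAgent = agent {
    name = "billing-agent"
    description = "Answers questions about customer invoices."
    prompt {
        """
        You are a friendly billing assistant.
        Answer questions about invoices and escalate refund requests to a human.
        """.trimIndent()
    }
    tools("get-invoice", "get-contract")
}
```

Because the DSL is plain Kotlin, such a definition stays type-checked and testable while reading almost like a configuration file – which is exactly what lowers the entry barrier for teams without deep ML expertise.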
Another feature that emerged during the development of the Eclipse LMOS platform is the LMOS Protocol. As we explained in an interview, each agent operates with its own lifecycle, which makes it essential for them to have a way to discover one another and negotiate their communication protocols. This is precisely the role fulfilled by the LMOS Protocol within the platform.
Towards an Internet of Agents
The LMOS Protocol is a path towards what we like to think of as an “Internet of Agents.” The idea is to create a multi-agent operating system designed for internet scale, where AI agents and tools from diverse organisations can be effortlessly published, discovered, and interconnected – independent of their underlying technologies. This vision draws on the paradigms and evolution of the Social Web and the Internet of Things (IoT), extending their foundational principles to enable web-native multi-agent ecosystems.
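The publish/discover/negotiate cycle described above can be sketched in a few lines. The real LMOS Protocol builds on web standards and rich, machine-readable agent descriptions; the types, fields, and registry below are deliberately simplified assumptions for illustration:

```kotlin
// Illustrative sketch of publish, discover, and negotiate between agents.
// Simplified assumption: descriptions carry only capabilities and transports.

data class AgentDescription(
    val id: String,
    val capabilities: Set<String>,   // what the agent can do
    val protocols: Set<String>,      // transports it can speak, e.g. "http", "mqtt"
)

class AgentRegistry {
    private val published = mutableListOf<AgentDescription>()

    // An organisation publishes its agent's description to the registry.
    fun publish(description: AgentDescription) {
        published += description
    }

    // Other agents discover peers by the capability they need.
    fun discover(capability: String): List<AgentDescription> =
        published.filter { capability in it.capabilities }

    // Two agents negotiate a common transport; null means no overlap.
    fun negotiateTransport(a: AgentDescription, b: AgentDescription): String? =
        (a.protocols intersect b.protocols).firstOrNull()
}
```

The point of the sketch is the shape of the interaction, not the wire format: agents from different organisations only need to agree on how descriptions are published and matched, not on a shared implementation technology.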
An "Internet of Agents" enabled by the LMOS Protocol. Source: https://eclipse.dev/lmos/
How is this new approach to an operating system different from traditional ones? We recently explained this difference in a talk at the SDV Community Days in Rotterdam: With LMOS, we set out to shift the operating-system approach into the realm of Generative AI. The idea is to treat LLMs as the Core Processing Unit – the engine that performs the heavy lifting and delivers the core functionality for your applications. By providing the model with short-term memory and a set of auxiliary tools, we envisioned building a kind of platform operating system atop this foundation. Developers can then create applications on this platform – only now, these applications are typically referred to as “AI agents.” Another difference is that agents, unlike classical applications, have a so-called “planning phase”: rather than relying on a fixed algorithm, the agent processes input by generating an ad-hoc algorithm at runtime.
Classical operating systems with applications as the top layer vs. LMOS with agents. Slide from our presentation at the SDV Community Days at Lunatech on 25 March 2025.
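The planning-phase distinction can be shown in miniature. In the sketch below, `planWithLlm` is a deterministic stub standing in for a real LLM call, and the tool names are invented for illustration – the structure, not the names, is the point:

```kotlin
// A classical application hard-codes its control flow; an agent first derives
// a plan at runtime and then executes it. planWithLlm is a stub for an LLM call.

fun planWithLlm(request: String): List<String> =
    // A real agent would prompt an LLM to produce this step list ad hoc.
    if ("invoice" in request) listOf("fetch-invoice", "summarise")
    else listOf("answer-directly")

fun runAgent(request: String, tools: Map<String, (String) -> String>): String {
    // 1. Planning phase: the "algorithm" is generated at runtime.
    val plan = planWithLlm(request)
    // 2. Execution phase: apply each planned tool to the intermediate result.
    return plan.fold(request) { intermediate, step ->
        tools.getValue(step)(intermediate)
    }
}

// Hypothetical tool set an agent platform might expose.
val availableTools = mapOf<String, (String) -> String>(
    "fetch-invoice" to { q -> "invoice data for: $q" },
    "summarise" to { data -> "summary of ($data)" },
    "answer-directly" to { q -> "direct answer to: $q" },
)
```

Two different requests thus take two different execution paths through the very same code – the control flow lives in the generated plan, not in the program text.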
Open Web: (European) Sovereignty by Design
By empowering development teams to create their own agents and participate in the emerging “Internet of Agents,” we, the Eclipse LMOS developers, aim to position the project as a foundational part of the Open Web and open ecosystems. Our vision is to maximise the sovereignty of individuals and organisations, enabling them to build vendor-neutral agents that operate independently of large corporations and comply with all relevant data privacy regulations. We believe AI agents should not only be developed on the platforms of a few major companies, but also independently by a diverse community of stakeholders. A positive side effect is that LMOS could also help strengthen European technological sovereignty.
A Call to Action
Eclipse LMOS’s collaborative and inclusive approach – along with our vision of building an open ecosystem – is also a call to action for developers around the world to help evolve Eclipse LMOS into the foundation for Agentic Computing. As is often the case in open source, the saying “the more, the merrier” certainly applies. That’s why we warmly invite everyone to join us in driving the democratisation of AI agents by building toolkits, infrastructure, and open protocols to support the emerging wave of agentic computing.
What if, for a change, the best-in-class platform to build, manage, and govern agents at scale was open from the start – secure by design, leaving the closed ones scrambling to catch up? If that resonates with you, we'd love for you to get involved. Visit the project’s website, where you will also find instructions on how to contribute, or check out the talks published on the LMOS YouTube channel.
Authors: Kai Kreuzer and Arun Joseph
Contributing authors: Robert Winkler, Jasbir Singh, Patrick Whelan, Amant Kumar, Patrick Schwager