Scanning the AI Horizon: 3 Key Trends from EEML 2025
After EEML 2025, Adnan outlines three AI trends founders must watch: agentic multi-agent systems rising beyond monolithic LLMs, specialized compliant ML for healthcare, and emerging limits of LLMs driving new architectures. Read his key takeaways.

The Eastern European Machine Learning (EEML) School isn't your typical conference. This year it was a vibrant gathering where researchers and engineers dissected the real challenges of AI - where the field is going and what's breaking along the way - moving far beyond the scope of classical machine learning.
EEML 2025 revealed one thing clearly: the future of AI won't be built by monolithic models alone. The buzz around Agentic AI is palpable, and it feels as though we are approaching the technological limits of what monolithic Large Language Models (LLMs) can achieve on their own. The next leap will come from specialized, collaborative systems stitched together with precision, purpose, and a lot of debugging.
After attending EEML 2025, here are three themes that stood out.
1. Agentic AI and the Rise of Multi-Agent Systems
The hype around Agentic AI isn’t just talk - it signals a structural shift in how we think about building intelligent systems.
As Large Language Models (LLMs) start to plateau in what they can achieve alone, the industry is turning toward ecosystems of smaller, purpose-driven Agents. These Agents don’t just execute tasks — they collaborate, delegate, and learn to act more like teammates than tools.
New frameworks like LangGraph and CrewAI are enabling this behavior today, allowing agents to coordinate, negotiate, and perform in tandem. The challenge now is enabling deeper cooperation — with protocols like Google’s proposed Agent-to-Agent (A2A) communication laying early groundwork.
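The core delegation pattern these frameworks formalize can be sketched in a few lines. This is an illustrative hand-rolled version, not the actual LangGraph or CrewAI API; the `Agent` and `Coordinator` names and the skill-routing logic are assumptions for demonstration, and the `handle` callables stand in for what would be LLM calls in practice.

```python
from dataclasses import dataclass, field
from typing import Callable

# Illustrative sketch of the delegation pattern that frameworks like
# LangGraph and CrewAI formalize; all names here are hypothetical.

@dataclass
class Agent:
    name: str
    skills: set[str]
    handle: Callable[[str], str]  # in a real system, an LLM call

@dataclass
class Coordinator:
    agents: list[Agent] = field(default_factory=list)

    def delegate(self, skill: str, task: str) -> str:
        """Route a task to the first agent advertising the needed skill."""
        for agent in self.agents:
            if skill in agent.skills:
                return agent.handle(task)
        raise LookupError(f"no agent can handle skill: {skill}")

# Two specialized agents cooperating on one workflow.
researcher = Agent("researcher", {"search"}, lambda t: f"notes on {t}")
writer = Agent("writer", {"draft"}, lambda t: f"draft: {t}")
crew = Coordinator([researcher, writer])

notes = crew.delegate("search", "EEML 2025 trends")
summary = crew.delegate("draft", notes)  # one agent's output feeds the next
```

The point of the sketch is the structure: each agent owns a narrow capability, and the coordinator's job is routing and hand-off, which is exactly the part protocols like A2A aim to standardize across vendors.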
Crucially, this wave isn’t just for ML specialists. It’s opening doors for classic backend and server-side engineers - especially those skilled in integration and architecture - to join the AI development frontier.
Technologies like Retrieval-Augmented Generation (RAG) and vector databases are now standard components in these systems, bringing memory and contextual intelligence into the loop.
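The retrieval half of RAG reduces to "embed the query, rank stored documents by similarity, prepend the winners to the prompt." A toy version using bag-of-words counts instead of learned embeddings (production systems use neural embeddings and a vector database, but the retrieve-then-generate loop is the same):

```python
from collections import Counter
import math

# Toy retrieval step of RAG: "embed" text as word counts and rank
# documents by cosine similarity. Stands in for a vector database.

def embed(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "agents coordinate and delegate tasks",
    "federated learning trains models on private data",
]
context = retrieve("how do agents delegate tasks", docs)
# The retrieved context would then be prepended to the LLM prompt,
# giving the model memory it was never trained on.
```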
If 2024 was the year of prompting, 2025 is shaping up to be the year of orchestration.
2. Machine Learning in Healthcare: High Stakes, Hard Problems
While agentic frameworks dominated future-facing discussions, there were also many fascinating talks on specific applications of classical ML. One that stood out was a deep dive into machine learning in medicine, where Emma Rocheteau showcased the profound difficulties researchers currently face.
At first glance, medicine looks like a perfect playground for ML because of the sheer abundance of available data and the potential for proactive, model-based predictions. The reality, as always, is far more complex. While a few gems exist, most current AI/ML systems in healthcare are not well-optimized.
The core problem lies within the data itself. Medical data is often messy, biased, and sparse, riddled with spurious correlations between features that can derail a model's real-world performance. On top of that, the impact of a mistake is critically high. While a certain margin of error is acceptable in many fields, it's a different story when you're gambling with human lives.
To combat this, the field is moving towards domain adaptation and federated learning. The strategy is to train specialized models on specific, curated datasets to make predictions for very particular problems. By working with smaller, cleaner datasets where spurious correlations are easier to spot, researchers can lower the chance of catastrophic errors and avoid confusing models with the demands of unspecialized data. This mirrors the pattern seen in agentic software - we are creating specialized models for specialized tasks. Medicine is moving toward a multi-model AI system, not an all-in-one solution.
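The federated side of this is easy to miss in prose: each hospital trains locally and only model weights, never patient records, leave the site. A minimal sketch of the federated-averaging idea (FedAvg), with plain lists standing in for model tensors and hard-coded gradients standing in for local training:

```python
# Minimal sketch of federated averaging: sites update a shared model
# locally, and a server averages the resulting weights. No raw patient
# data crosses site boundaries. Values here are illustrative.

def local_update(weights, gradient, lr=0.1):
    """One gradient step of local training at a single site."""
    return [w - lr * g for w, g in zip(weights, gradient)]

def federated_average(site_weights):
    """Server-side aggregation: coordinate-wise mean across sites."""
    n = len(site_weights)
    return [sum(ws) / n for ws in zip(*site_weights)]

global_model = [0.0, 0.0]
site_gradients = [[1.0, 2.0], [3.0, 4.0]]  # stand-ins for local training

updated = [local_update(global_model, g) for g in site_gradients]
global_model = federated_average(updated)
# global_model is now approximately [-0.2, -0.3]
```

The design choice worth noting is that aggregation sees only parameters, which is what makes the approach attractive for regulated medical data in the first place.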
MLOps in this field is also a fascinating challenge, as much of the training data contains sensitive personal information that cannot be used as-is under privacy regulations. A significant effort is underway to sanitize this data and build robust validation checks to ensure compliance with standards like HIPAA and GDPR.
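As a rough illustration of where such a validation gate sits in a pipeline: redact known PII patterns, then refuse any record that still matches one. Real HIPAA/GDPR compliance involves far more than regexes; the patterns and record text below are assumptions for demonstration only.

```python
import re

# Illustrative pre-training sanitization gate: redact obvious PII
# patterns, then verify nothing identifiable remains. A real
# compliance pipeline would go well beyond pattern matching.

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def sanitize(record: str) -> str:
    for label, pattern in PII_PATTERNS.items():
        record = pattern.sub(f"[{label.upper()} REDACTED]", record)
    return record

def is_compliant(record: str) -> bool:
    """A record passes only if no known PII pattern remains."""
    return not any(p.search(record) for p in PII_PATTERNS.values())

raw = "Patient 123-45-6789 reachable at jane@example.org"
clean = sanitize(raw)
# Only records passing is_compliant() would enter the training set.
```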

3. Hitting the Limits of Large Language Models?
In one of the most engaging overviews of the week, Razvan Pascanu's keynote pulled no punches. After years of riding the LLM wave, the field may be reaching its natural limits.
Two contradictory forces are shaping the next phase:
- A push for bigger context windows to make models feel more “aware”
- A race to build smaller, local models that run efficiently on-device
This tension came into focus right after the summit, when OpenAI released a compact, open-weight GPT model - joining Meta's Llama series and others in the fast-growing ecosystem of lean, local-first AI.
Pascanu’s conclusion? If we want another leap like the one Transformers gave us, we’ll need entirely new model architectures - not just larger GPUs or more data.
The Road Ahead
To me, the key takeaway from EEML is that AI is unequivocally here to stay. It has real, tangible use cases and will fundamentally change the way we interact with technology.
But the defining questions remain. Who will be the next big player to introduce a new architecture with the same impact as the Transformer, which kicked off the entire LLM era? And how do we continue to cost-optimize existing LLMs, using them as a foundational layer for interaction, task delegation, and domain specialization to solve the world's most complex problems?
The conversations at EEML suggest the answers will come not from a single, all-powerful model, but from a collaborative ecosystem of specialized intelligence.