Between a promise and a threat: from the fall of assistants to the rise of AI agents
Juan Ignacio Moreno, Head of AI Solutions & Strategy for El Economista
Artificial intelligence is here to stay, that much is undeniable, but the pace at which these applications are being developed is difficult for either the market or the public to absorb. We are at a moment of disruption in the approach to artificial intelligence solutions. In 2024 we were focused on the conceptualisation of so-called “assistants” or “deterministic agents,” that is, AI supported by large language models (LLMs) which works on demand and executes specific tasks, usually aimed at classifying, summarising or generating new information.
One year later, there seems to be a general agreement among all stakeholders to focus the design of productive solutions on “multi-agent” architectures which, when faced with a specific request, can make routing and classification decisions, and generate and group information proactively and autonomously, based on their own criteria and on the preferences of their human users, whom they gradually get to know and to whom they can easily adapt. They do this by breaking complex problems into subtasks and deciding by themselves, at each interaction, which specific actions are needed to respond to the request.
In any case, it is worth stepping back from the eye of the storm and reflecting with some temperance: is this market maelstrom in line with the spirit of the EU AI Act? Are these new “agents” complementary to the “assistants,” or do they cannibalise them altogether? What about costs and their control? These are just some of the questions that deserve, at the very least, a proposal for action that addresses competitiveness, control, security and regulatory compliance while upholding the principles of responsible AI development.
The short answer to the first question is yes. Indeed, among the main objectives of the European artificial intelligence law is to establish a harmonised regulatory framework ensuring that AI systems are safe, ethical and reliable, while at the same time fostering innovation and competitiveness.
However, what is possible should not be confused with what is recommended. The fact that we have the capacity to develop multi-agent AI solutions to address “any” use case does not mean that we should deploy them on a massive scale without first stopping to assess their impact.
The development of AI-based solutions must start with a real understanding of the problem to be solved. The best answer does not always come from the latest technology; very often, adapting established technologies offers greater effectiveness, fewer risks and a more sustainable integration, as well as greater certainty about operating costs.
We are living in a time of constant and continuous revolution. Major manufacturers are pushing ahead with the development of new AI-based technologies, this time with unified efforts and a shared vision for the evolution of their cloud AI service platforms: solutions built on multi-agent frameworks. They are driven by the voracious need to sell new services, backed by an investment capacity and operating in a competitive scenario never seen before.
The market, meanwhile, is moving more slowly, unable to keep up with the frenetic pace set by the tech giants. It remains wary of the productisation of these solutions and their integration into business processes, and it is attentive both to the evolution of regulation and to the lack of maturity of a technology burdened by its incredible speed of deployment.
We have the capability to create autonomous agents, but also the ethical responsibility to decide how far to go.
In this situation, it is worth stopping to think about the best solution for each business need, whether based on agents, assistants or even more traditional techniques which do not use large language models and are still perfectly valid and effective for many use cases. We should not be blinded by trends or by the marketing pressure we are increasingly subjected to by the large cloud service providers.
Agent-based and assistant-based approaches can and should coexist in the future, each as the most appropriate response to a given need. This is perhaps one of the most sensitive issues. It is not just about efficiency, productivity or cost savings, but about accountability, privacy and digital rights. Will we let an AI agent decide on the approval of a loan, the assessment of a professional profile or the diagnosis of a patient without human supervision?
It is clear that we can design systems that work without our intervention, but we must also be clear about what we are not willing to delegate. Not out of fear, but out of principle. Because not everything that can be automated should be automated in the same way.
In the end, it is worth remembering that “slow and steady goes a long way.” We should not confuse progress with haste. We have the technological capability to create autonomous agents, but also the ethical responsibility to decide how far we want to go, and with what guarantees. We must act in line with the spirit of the European regulation, which provides a framework for developing AI with a humanist approach, safeguarding fundamental rights.
The choice is not between moving forward or slowing down, but between moving forward well or taking a shot in the dark. If we want an AI that adds to and strengthens our capabilities without eroding our principles, we must design it with purpose, with control and with common sense.