AutoML, augmented analytics, and the generative explosion have opened the door. The question is whether we are prepared for what comes through it.
For decades, artificial intelligence was a limited territory. Building a machine learning model required deep statistical knowledge, mastery of specialized programming languages, and access to computing infrastructure that only large corporations and research centers could afford. AI was, in practice, a club with a considerable barrier to entry.
Over time, that barrier has eroded in successive waves. First with AutoML, which automated model building. Then with augmented analytics, that is, data analytics that, by using AI, placed advanced analytics in the hands of less technical business profiles. And now, with the universalization of generative AI, a third act is happening that changes everything because anyone with an idea and access to the Internet can create agents, skills, and intelligent workflows without writing a line of code.
However, as has happened with other technological developments, the universalization of AI forces us, as a society, to rethink some issues. In this post, we review some of them.
AutoML: the first act of universalization
The concept of AutoML (Automated Machine Learning) arose as a response to the bottleneck that existed due to the shortage of data scientists capable of building quality models. To understand its value, it is worth remembering what the traditional machine learning process entails:
-
Collect and clean data.
-
Select the relevant variables (feature engineering).
-
Choose from dozens of possible algorithms depending on the type of problem we want to solve (i.e. predict, classify or detect patterns) or looking for a better behavior of the model.
-
Adjust its hyperparameters. For example, configure in a neural network, the number of layers or neurons it has.
-
Validate the results with statistical metrics.
Each of these stages requires specialized technical criteria. What AutoML does is automate most of that pipeline, allowing the user to focus on defining the problem and providing the data.
Platforms such as Google AutoML, Amazon SageMaker Canvas or Azure Machine Learning allow, for example, a logistics manager to upload a shipment history and obtain a predictive model of delivery times without needing to know what gradient boosting isor how a neural network is calibrated. The system tests combinations of algorithms, optimizes parameters, and returns the model with the best performance. The expert brings what no algorithm can automate: knowledge of the domain and the context of the business.
According to industry data, the AutoML sector has gone from $2.34 billion in 2025 to a projection of $3.43 billion in 2026, with an annual growth of close to 47%. This growth is accompanied by rapid enterprise adoption, with many organizations already incorporating these capabilities or planning for their deployment in the near term. What a decade ago was frontier research is now standard functionality on major cloud platforms.
Augmented analytics: self-explanatory analytics
If AutoML democratized model creation, augmented analytics did the same with the extraction of insights, that is, with the obtaining of valuable conclusions. The central idea is that analytics tools incorporate artificial intelligence into every phase of the process, from data preparation to visualization of results, so that the user does not need to be an expert analyst to obtain actionable conclusions.
In practice, this translates into concrete skills, such as:
-
Automatic detection of anomalies in the data.
-
Generating natural language explanations for why a metric has changed.
-
Proactive analysis recommendations that the user had not requested but that the system identifies as relevant.
Consider a sales manager who looks at his scorecard and, instead of having to cross tables manually, receives an alert saying, "Sales in the southern region are down 12% this quarter, correlated with an 8% increase in supplier X's lead times". That is augmented analytics: an analysis that goes beyond the mere visualization of data, where the tool also interprets and contextualizes it.
Gartner has long pointed out how AI is transforming consumers of analytical content into creators of that same content. In other words, the line between those who made the reports and those who read them is blurring. In addition, the consultancy forecasts that by 2027, 75% of new analytical content will use generative AI to deliver contextual intelligence.
However, Gartner itself introduces a nuance that deserves attention: 60% of organizations will fail to realize the value of their augmented analytics use cases due to poor data governance frameworks. That's why it's so important to have strong data governance in place before deploying AI models.
The Generative Explosion: Agents and Skills Within Everyone's Reach
The third act came with generative AI. Some of the best-known models are ChatGPT, Claude or Gemini that put artificial intelligence in the pockets of millions of people as a consultation tool and, in addition, that opened something qualitatively different: the possibility of creating with AI.
Platforms such as n8n, Make or the Anthropic and OpenAI environments themselves today allow intelligent agents to be designed: programs that reason, consult sources, make decisions and execute actions through visual interfaces. A non-technical user can set up a functional agent in a range of 15 to 60 minutes, without writing code. You can build specialized skills, chain tools, connect APIs, and orchestrate workflows that three years ago would have required an entire engineering team to develop.
Today, those who build AI are no longer just engineers; they're marketers who automate content generation, operations managers who create assistants for their team, or consultants who design custom analytics flows. The creation of artificial intelligence has become, for a growing segment of professionals, a natural extension of their work.
The challenges of universalizing without trivializing
Beyond the opportunities, this technological openness has a downside that should not be underestimated. When the barrier to entry is low, more talent and more ideas come in, but so does more risk. And the risks of non-experts creating AI systems operate on several planes simultaneously.
-
The first is security: connecting language models to external tools and chaining agents multiplies vulnerabilities that are already problematic in isolated models. Most technical creators are unaware of the risk that can occur in using content with malicious instructions in the agent training process. A misconfigured agent accessing sensitive data can become an unwitting backdoor.
-
The second is governance: creating an agent is easy; ensuring that it operates within ethical and regulatory boundaries, not so much. Who is responsible when an agent built by a marketing analyst makes a decision that affects customers? Under what framework is a workflow audited that no one documented because the tool did not require it? Regulation is advancing more slowly than technology, and the regulatory vacuum is particularly pronounced in the area of autonomous agents.
-
The third, perhaps the most subtle, is the illusion of understanding: no-code interfaces are extraordinarily effective at hiding the underlying complexity. A user can build a working agent without understanding why it works, which means they won't understand why it fails when it fails either. And it will fail. The difference between a robust system and a fragile one often lies in design decisions that visual platforms do not make visible: the treatment of edge cases, the handling of errors, the calibration of uncertainty.
Universalize with open eyes
Despite everything, the universalization of AI is, on balance, good news. Expanding the number of people capable of creating intelligent solutions multiplies the capacity for innovation. And all of this translates directly into useful applications.
But just because something is easy to build doesn't make it easy to operate responsibly. The challenge of the coming years goes beyond the technical (improving the security of agents, developing governance frameworks or closing vulnerabilities). The challenge is also educational because we need those who create AI, even if they are not engineers, to understand enough of what they are creating to anticipate their failures, respect their limits and assume their consequences.
True democratization consists of giving tools along with the minimum understanding to use them well. And that is what must be guaranteed for the use of AI to be something universal.
Content produced by Juan Benavente, a senior industrial engineer and expert in technologies related to the data economy. The content and views expressed in this publication are the sole responsibility of the author.