Exclusive Column: Signals of Tomorrow. 4. AI: the “too” perfect technology

When the algorithm stops being a tool

There is something profoundly unsettling about the suspension of facial recognition in Essex. A system used by the police there was halted after showing very high overall performance, but with uneven results across demographic groups. At first glance, it might seem like any other story: a technology is paused, the data is reviewed, the errors are corrected, and the system restarts. It has always worked this way. Technology advances through attempts, mistakes, and improvements. And yet, not this time. Here we are not dealing with a malfunction, nor with a system that does not work well enough. We are facing something more subtle and, for that very reason, more difficult to manage.

For the first time, a technology is not stopped because it is inefficient, but because it is too efficient in a way that those using it consider wrong. And this, if we truly think about it, completely changes the ground we are standing on.

For decades, we have judged technology using relatively simple parameters: does it work or not, is it faster or not, does it reduce errors or increase them. It was an almost engineering-like language, reassuring in its linearity. Today, that linearity is no longer enough.

Because a new dimension enters the picture—one that technology, until recently, managed to avoid: the dimension of ethics, of justice. These are not technical words. They are not precisely measurable. They cannot be optimized with a formula. And yet they suddenly become central.

In the Essex case, the facial recognition system was not "wrong" in the traditional sense. It was accurate. It worked. It produced results. But those results were not distributed evenly. They forced us to confront whether we are willing to accept, in exchange for the efficiency that artificial intelligence can provide, something we perceive as unjust: inequality. And that is where a fracture opens. Because we realize that a system can be technically perfect and, at the same time, ethically problematic.
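To make that fracture concrete, here is a minimal sketch in Python with entirely hypothetical numbers (they are not taken from the Essex system or any real deployment): a face-matching model can report a reassuringly high overall accuracy while its false-match rate, the chance of an innocent person being flagged, differs sharply between two demographic groups.

```python
# Illustrative only: invented counts, not data from the Essex deployment.
# Shows how high overall accuracy can coexist with very uneven
# false-match (false positive) rates across demographic groups.

counts = {
    # group: (true_matches, false_matches, true_non_matches, false_non_matches)
    "group_a": (480, 5, 9500, 15),
    "group_b": (95, 40, 850, 15),
}

total_correct = 0
total_cases = 0
for group, (tp, fp, tn, fn) in counts.items():
    total_correct += tp + tn
    total_cases += tp + fp + tn + fn
    false_match_rate = fp / (fp + tn)  # how often a non-match is wrongly flagged
    print(f"{group}: false-match rate = {false_match_rate:.1%}")

print(f"overall accuracy = {total_correct / total_cases:.1%}")
# With these invented figures:
#   group_a: false-match rate = 0.1%
#   group_b: false-match rate = 4.5%
#   overall accuracy = 99.3%
```

On an aggregate dashboard this system would look excellent, yet one group would bear a false-match risk many times higher than the other. That is precisely the kind of result that is "accurate" and problematic at the same time.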

If we look deeper, another awareness emerges, perhaps even more uncomfortable. We have always thought—or perhaps hoped—that artificial intelligence was neutral. A sophisticated tool, certainly, but still just a tool. Something that executes, not interprets. That calculates, not judges. But AI does not emerge in a vacuum; it is built on data. And data is anything but neutral.

Data is history. It is the result of millions of decisions, behaviors, preferences, distortions. It is the way a society has evolved, with all its asymmetries. When an algorithm learns, it is not simply recognizing patterns. It is internalizing that history. And inevitably, it carries with it everything in that history that does not work. This means that the problem is not so much that AI can be “wrong.” The problem is that it can be consistent with a world that is itself not fair. And this is where the Essex case becomes interesting, because it stops being a technological issue and becomes a mirror. A mirror that reflects not what we would like to be, but what we have been. Perhaps what we still are.

There is another step, quieter but decisive. For years, bias has been considered a human dimension. Imperfect, certainly, but also limited. An individual can be biased, but can also change their mind. They can be corrected, challenged, even disproven. Human bias has a fundamental characteristic: it is unstable. But when it enters an algorithmic system, it changes its nature. It becomes stable, repeatable, and scalable. It is no longer a behaviour, but a structure. And at that point, it no longer appears as an occasional error, but as an operating mode. This is the real shift.

It is not that AI "makes mistakes." It is that it can function perfectly according to a logic we no longer question, precisely because it works. And when we realize the error, we can stop the system, we can "switch it off." We can do that now; who knows whether we will have the same possibility in the future. Because human error is visible. It has a face. It has a context. It can be recounted.

Algorithmic error, on the other hand, tends to disappear within the system. It becomes part of the flow. It makes no noise. And therefore it is harder to recognize.

Linked to this is another aspect that emerges just as strongly: responsibility. In the traditional world, the chain is clear. Someone makes a decision. Someone is accountable. Even when the system is complex, there is still a point of origin. With AI, that point becomes blurred. Who is responsible for a result produced by an algorithm? The person who wrote the code? The one who selected the data? The one who decided to use it? The one who benefits from it? The most honest answer is: everyone. And, at the same time, no one directly.

A diffuse, diluted responsibility emerges: a kind of "ethical DeFi," a decentralization of accountability in which everyone can, in a sense, disengage and hide. It does not take much effort to recall moments when entire populations passively shared terrible, wrong decisions, simply telling themselves it was not their fault. And this is one of the most profound transformations we are experiencing. Because technology does not only change what we do. It also changes how we distribute responsibility.

At this point, a question becomes inevitable. What kind of systems do we want to build? Because we can no longer limit ourselves to asking whether they work.

We must begin to ask how they work and, above all, for whom they work better. It is a question without a simple answer. Because optimizing a system often means reinforcing what already exists. And what exists is not always fair. Correcting a system, on the other hand, means introducing intentionality. It means deciding that some outcomes are more acceptable than others. And this brings technology into a territory that has never belonged to it: that of value-based choices.

For a long time we believed that ethics could come later. First you build, then you regulate. First you innovate, then you correct. With AI, this approach shows all its limits. Because the impact is no longer deferred in time. It is immediate. A system is implemented and produces effects instantly, at scale. There is no longer a "safe" phase between innovation and consequences. And this forces us to rethink the process. Ethics can no longer be an add-on; it must become part of the design. Not as an external constraint, but as a structural element.

If we widen our perspective, we realize that the Essex case is only an early signal. The same issue will emerge—and is already emerging—in many other fields. Every time a system makes decisions based on data, it carries this question with it. It is no longer just a matter of precision; it is a matter of balance. And perhaps this is the most interesting point.

For years we feared that artificial intelligence would become too autonomous, too distant from humans, too difficult to control. Today we begin to glimpse a different risk. Not that AI will drift away from us, but that it will resemble us too closely. That it will reflect with precision what we have been, without the uniquely human ability to question that reflection. Because an algorithm has no doubts. It does not pause to ask whether a result is right.

So the point is not to stop technology. The point is to understand that, at this stage, we can no longer treat it as a neutral tool. Every system we build embeds a vision, even when we do not state it. And every time we automate a decision, we are choosing which parts of our history to carry into the future. This is not a technical choice. It is a cultural one.

If we want to preserve our ability to make mistakes, to reflect on them and overcome them; if we still want to give value to our capacity to be different, original, “anomalous,” then we must soon confront the issue of algorithmic ethics: we cannot escape this step. And we must do it quickly: we must give a soul to the machine, before the machine takes it away from us.