02/05/2026

Technology That Accompanies or Technology That Replaces


In long-lived societies, technology can strengthen autonomy and improve care. But it can also become an efficient way to reduce human presence. The question is no longer what technology can do, but what it should not replace.

Accompanying Is Not Replacing

In long-lived societies, technology has stopped being a complement and has become part of the everyday environment of care. Home sensors, telecare, health apps, algorithms that anticipate risks, or platforms that organize appointments are already part of the lives of many people.

The promise is familiar: more autonomy, more security, more efficiency. But the truly important question is a different one: what are we willing to delegate, and what are we not?

Because automating is not only about doing a task faster. It is also about deciding which functions we consider dispensable and what place we give to the human relationship within care.

To accompany means expanding capabilities without moving the person away from the center. To replace means swapping human presence for technical functionality. The difference seems obvious, but in practice it becomes less clear. A system that reminds someone to take medication can strengthen autonomy. But if that same system is used to justify fewer visits, less conversation, or less contact, we are no longer dealing with support: we are dealing with substitution.

The problem is not technology itself. It is the judgment with which it is incorporated into the care model.

When Technology Improves Care

At its best, automation makes it possible to anticipate falls, detect changes in routine, monitor relevant signals, optimize resources, and avoid unnecessary trips. All of that can be valuable, especially in contexts of pressure on health and social services.

But its most important contribution is not technical; it is human: freeing up time.

That time can be translated into conversation, accompaniment, listening, and real attention—into what gives density and meaning to care. Automating tasks only makes sense if it allows us to strengthen the human dimension of the relationship, rather than thinning it out until it becomes unrecognizable.

Good technology does not push the person out of the community or turn care into a cold and automatic process. It facilitates it, makes it more sustainable, and, at its best, makes it possible to devote more energy to what no machine can offer on its own.

When Efficiency Displaces the Relationship

Risk appears when automation stops being a means and becomes a functional substitute for care. Then a visit is replaced by a notification, dialogue by a form, and relationship by an indicator.

The system gains efficiency, but it loses the capacity to recognize the person. And that cost is not always well measured. It shows up as loneliness, as a sense of irrelevance, as an impersonal experience. Care stops being felt as care and begins to look like remote management.

In longevity contexts, this displacement can consolidate easily if there is no clear criterion. It is enough for budget pressure, professional shortages, or a logic of scale to impose a silent conclusion: for certain people, more distant attention is sufficient.

That is the true turning point.

Social Robots: Company or Simulacrum

Social robots illustrate this tension well. They can help maintain routines, support mild cognitive functions, or facilitate basic interaction. In certain contexts, they can be useful.

But they can also become a perfect excuse: “they already have company.”

Here we must be precise. A simulacrum of relationship cannot replace a real relationship. If a technological tool complements human accompaniment, it can add value. If it replaces it, it impoverishes the result and introduces subtle forms of isolation and even infantilization.

The question is not whether the machine responds or speaks. The question is whether the person maintains meaningful human bonds, or whether they are pushed—politely and with a good interface—into an administered loneliness.

The Risk of Silent Inequality

Automation can also introduce a form of discrimination that is almost invisible. If certain people, because of age, systematically receive remote attention while others continue to access in-person contact, we are not dealing with a simple organizational innovation.

We are dealing with a difference in the quality of care.

The implicit criterion is unsettling: “for that age, this is enough.” That lowered threshold is rarely stated openly, but it can end up organizing entire decisions. In that way, technology does not correct ageism; it modernizes it.

Longevity should not translate into colder services or lower standards. It should force us, precisely, to design services that are smarter, more sensitive, and more human.

Understanding and Being Able to Choose

Responsible automation requires at least two basic guarantees. The first is understanding which technologies are used, what data they collect, and how they influence decisions. The second is choosing when to accept that technological mediation and when to require human presence.

Without understanding, the system becomes opaque. Without choice, it becomes paternalistic.

In long-lived societies, care cannot become a process that decides for the person “for their own good” without allowing them to understand or intervene. Dignity does not depend only on receiving attention. It also depends on preserving the ability to judge how that attention is received.

The Underlying Question

The real debate is not technological. It is institutional, social, and moral.

What care model is funded. What is considered success. What balance is established between efficiency and dignity. What things can be delegated without degrading the human experience, and what should never be outsourced to an automatic system.

Technology that accompanies is designed with explicit criteria, continuous evaluation, and an orientation toward outcomes that matter to people. Technology that replaces usually emerges gradually, driven by pressure on systems and legitimized by the rhetoric of efficiency.

It does not usually present itself as abandonment. It presents itself as innovation. And that is where part of the problem lies.

A Collective Decision

Long-lived societies face a choice that is not rhetorical. They can move toward an efficient, scalable, monitored model, but one that becomes increasingly depersonalized. Or they can use technology to reorganize resources and strengthen autonomy, bonds, and recognition.

The difference will not depend only on the innovation available. It will depend on the judgment with which it is applied and on the kind of society we want to sustain.

Because there is something that cannot be automated without impoverishing the result: presence.


If you could choose, would you prefer technology that takes care of you… or a community that accompanies you?