The quick answer is that, like the rest of society, Artificial Intelligence (AI) can also discriminate against us because of our age. It discriminates on many other grounds too, but age seems to be, once again, an easy trait to discriminate on. And, as I have noted on other occasions, a socially accepted one.
Since it is such a hot topic (AI, I mean; I am very much afraid that ageism receives far less attention), there are numerous scientific studies documenting age bias in both classic algorithms and generative models. By generative models we mean chatbots (those bots you talk to when shopping online to ask for your size, for example) or image generators; but the evidence also covers the systems used in personnel selection or medical diagnosis. That is, models and systems that are increasingly common in our daily lives and on which, in many cases, we depend without any alternative. Everyday processes into which we have incorporated an agent that discriminates against us. We wanted help and found ourselves… with one more problem. Artificial intelligence may be an advance in certain respects, but it certainly brings problems with it.
It is worth emphasizing something, however: AI is not “born” ageist, nor does it discriminate out of its own intention. It does not discriminate “just because,” but as the result of learning. AI learns to discriminate because… that is what it sees, what it knows. It reproduces the ageism present in the data that feed it, in the design of the systems, and in how they are implemented. AI, let us remember, does not invent anything; it only repeats (which is why, among other things, I do not recommend that my students use it). So, if its database is the world as it exists, and this world is ageist, sexist… what results could we expect? AI has not “invented” discrimination; it has amplified and repeated the discrimination that already exists.
So, if it already exists… why is AI ageism a specific problem that deserves our attention? Because it is no longer a matter of one person in a certain place, carrying out a certain task (or many people, many places, many tasks), having a bias; the problem is that this bias is scaled up and repeated thousands of times a day, with no room for review or second thoughts, and in decisions that affect access to employment, services, information, and even health care.
But what does it mean to say that AI is ageist? How can something intangible be ageist? Isn’t it enough to close the computer and be done with it? Algorithmic ageism occurs when an AI system represents older people through stereotypes (frailty, dependence, illness, isolation); when it makes them invisible (they do not appear in data or examples); when it performs worse for them as users (more errors, for example, in the voice recognition of those horrible automated answering systems that handle us when we call certain companies); when it excludes them through inaccessible design; or, in one of the most terrible cases, when it makes discriminatory decisions based on age. In this regard, in case it is of interest, sociologist Justyna Stypińska explains that discrimination appears not only in the system’s final output but also in the data it learns from, in design decisions, and in the very invisibility of older people in the technological ecosystem.
Scientific evidence tells us that stereotyped images of old age are the most common finding in generative AI. A recent longitudinal study (Martens et al., 2025) analyzed 164 images generated by DALL·E depicting older people and found that, a year on, traits of digital ageism were still appearing: older people were predominantly associated with frailty, decline, and neutral or negative emotions (as if age took away our ability to feel), alongside the already familiar predominance of white people (to no one’s surprise). What worries me most is not only the bias itself but its persistence: far from correcting itself over time, the system kept reproducing practically the same negative social imaginaries about old age. That is, the technology was not “learning” to represent the diversity of aging better; it kept giving us back the same stereotyped view that already exists in society.
Recent work on language models (the models behind what we know as chatbots) shows that their responses also reflect age biases. Some studies specifically analyze the “benevolent ageism” we have discussed on other occasions: a form of apparently positive paternalism (“older people need help,” “they are less adaptable”) that in reality reinforces stereotypes. It does not appear in an explicit or hostile way; it comes dressed up as empathy while infantilizing older people. It also treats them as vulnerable (because of their age, not for any other reason) and assumes they are technologically incapable. Some everyday examples: a banking chatbot that, upon detecting that the person is older, automatically offers “simplified” instructions without asking what they need; a virtual assistant that assumes they do not know how to use an app; or a system that recommends content about dependence, health, or care simply because it infers that the user is older. It is not “insulting,” but it is constructing a very concrete idea of who that person is and of what they supposedly can or cannot do.
It strikes me as almost funny to think about how AI assumes older people know less. I imagine AI “thinking” to itself, “I am too sophisticated for older people to understand me.” In reality, it is nothing more than the technological translation of (very) human prejudices.
In high-impact areas, such as employment and health, the risks are even greater. For example, some companies use résumé-screening systems to filter applications. These systems can penalize long careers, old graduation dates, or profiles with a great deal of experience, making senior candidates invisible. As if accessing employment at certain ages were not already difficult enough because of existing prejudices, here the problem runs even deeper: it is not that the person hiring does not want you because of your age; it is that they never even learn that you exist.
I am especially concerned about health systems, where decisions are increasingly governed by models trained on populations in which older people are underrepresented, which means the tool does not really know how health, morbidity, and age interact. Something similar happens with facial and emotion recognition. The result is that these systems may simply work worse for older people.
We are not talking here, therefore, about symbolic issues, about discourse: algorithmic discrimination creates unequal access to rights and services. And here lies, perhaps, the most important point: the literature and various specialists insist that this is not a simple technical error or a one-off malfunction of the algorithm. Rather, AI acts as an amplified mirror of preexisting social structures. If society normalizes ageism, the data incorporate it; if the data incorporate it, AI learns it; and, by learning it, AI reproduces it at scale and with an appearance of neutrality.