


AI for Good: Questioning the idea of human vulnerability

Updated: Oct 2

Author: Dr. Begoña G. Otero, member of OdiseIA and research member of the subgroup “AI for Good”, cAIre Research Project.


Introduction: Human vulnerability in times of AI technologies


We live in a world where AI systems can be used to predict our next move, suggest products and services we did not know we wanted, or even decide whether we get a loan or a medical treatment. Exciting, right? However, here is the flip side: what if this same technology is used to exploit our weaknesses, subtly influence our decisions, harm our integrity, or deepen societal inequalities? If you feel anxious about these possibilities, you are not alone.


The rapid advancement of AI technologies raises significant questions about human vulnerability. What does it mean to be vulnerable in the context of AI? How can we ensure that these technologies help rather than harm us? These questions are not just academic; they affect all of us in our daily lives.


The following lines will delve into the concept of vulnerability, explain how it has evolved over time, and show how theories from different fields of knowledge can help us understand it better in the context of AI. Along the way, this blog post will provide concrete examples to illustrate these theories and question whether current understandings of human vulnerability are fit for purpose when considering AI technologies. As someone who researches the positive uses of AI to help vulnerable people, I believe it is crucial to define what vulnerability means in the AI era before we can fully harness its potential for good.


The evolution of the concept of vulnerability


The idea of vulnerability is as old as humanity itself, but its definition has evolved significantly over time. Let us take a journey through its evolution to understand how it applies today, especially in the context of AI technologies.


Early understandings of vulnerability


Historically, vulnerability was often viewed as a static condition. For instance, in the early 1980s, political scientist Robert Goodin defined vulnerability as the "susceptibility to harm to one's interests." Goodin argued that those most vulnerable to our actions and neglect are the ones we owe the most responsibility to. This perspective linked vulnerability directly to responsibility and accountability.


Consider, for instance, elderly individuals dependent on caregivers. Goodin's theory would suggest that society has a heightened responsibility to protect these individuals because they are more susceptible to harm.


Shifting perspectives: Universal vulnerability


Over time, scholars began to see vulnerability not only as a condition of specific groups but also as a universal aspect of the human condition. Martha Fineman, a prominent legal theorist, introduced the concept of the "vulnerable subject." She argued that all humans are inherently vulnerable due to our physical bodies and our dependence on social relationships and institutions. This view shifts the focus from protecting specific groups to recognizing vulnerability as a fundamental human trait.


Fineman's theory helps explain why people might feel vulnerable in different stages of life, such as during childhood, illness, or aging. It underscores the need for societal structures that support everyone, not just particular groups.


Contextual and relational vulnerability


More recent theories emphasize that vulnerability is not only a universal human trait but also highly contextual and relational. Ethicist Florencia Luna, for instance, proposed the idea of "layers (not labels) of vulnerability." According to Luna, vulnerability varies depending on an individual's status, time, and location. This layered approach allows for a more nuanced understanding of how different factors contribute to a person's vulnerability at various times.


For example, a pregnant woman might experience heightened vulnerability due to health risks and social expectations. However, her vulnerability is not static; it changes with her health, support systems, and socioeconomic status.


A comparative review of the most representative vulnerability theories across different fields of science yields several lessons. Vulnerability is an inherent aspect of humanity; it is specific to living beings and relational in nature. Power imbalances are central to its theoretical conceptualization, and there are two broad approaches to addressing it: mitigating harm collectively or empowering individuals. Lastly, ethics and law often take a labeling approach to vulnerability, providing lists of vulnerable populations. Yet, as Luna argues, layering opens the door to a more intersectional approach and stresses vulnerability's cumulative and transitory potential.


Current theories on vulnerability versus AI technologies: real-world examples


To better understand the impact of AI technologies on human vulnerability, and whether existing approaches to the concept are sufficient, it is worth examining some of the most relevant theories on vulnerability alongside real-world examples. These theories offer a framework for comprehending vulnerability's different dimensions and contexts.


Goodin's vulnerability model


Robert Goodin's model highlights the idea that vulnerability is closely tied to dependency and responsibility. According to Goodin, those who are most vulnerable to our actions are the ones we are most responsible for.


In the context of AI, consider automated decision-making systems in healthcare. Patients relying on AI for diagnosis and treatment are vulnerable to errors and biases in the system. Healthcare providers, therefore, should bear significant responsibility for ensuring the accuracy and fairness of these AI systems to protect vulnerable patients. Yet, is this the best approach?


Fineman's vulnerable subject theory


Martha Fineman's theory posits that vulnerability is a universal human condition. She argues that our inherent physical and social dependencies make us all vulnerable, and society should structure itself to support everyone rather than just specific groups.


Social media platforms that use AI algorithms and systems to moderate content can affect all users, especially when those algorithms fail to recognize the nuances of human communication. This can lead to unjust censorship or the spread of harmful content, heightening everyone's vulnerability to misinformation and harassment. However, to what extent do our legal systems allow for allocating responsibility to these platforms?


Luna's layers of vulnerability


Florencia Luna's concept of layers of vulnerability suggests that vulnerability is not static but varies depending on an individual's circumstances. This layered approach considers factors such as status, time, and location.


Gig economy workers using AI-driven platforms like ride-sharing apps may experience different layers of vulnerability. A driver might be more vulnerable during late-night shifts due to safety concerns and fluctuating demand. Their work environment, economic status, and the design of the AI platform all influence their vulnerability. However, are these layers really considered when designing these types of business services? Is vulnerability even a factor the owners of these platforms consider? Should it be?


Kohn's critique and enhancements


Nina Kohn criticized Fineman's theory for its potential to lead to overly paternalistic and stigmatizing public policies. She suggested enhancing the theory to better uphold individual liberties while still addressing vulnerabilities, advocating for a balanced approach that considers both protection and autonomy.


An example of her approach would be AI surveillance systems broadly implemented in public spaces for safety and security purposes. Such systems might infringe on individual privacy rights, making people feel constantly monitored, and potentially lead to misuse of data. Furthermore, they might disproportionately target certain racial or ethnic minorities, exacerbating existing biases and leading to unfair treatment. Kohn's perspective would emphasize, among other measures, the need to (a) impose transparency obligations covering how the technology is used, what data is collected, and how it is stored and protected; (b) set up solid oversight mechanisms to ensure that AI surveillance is used ethically and lawfully; and (c) restrict the scope of use to public areas or situations where it is genuinely necessary for public safety. As a result, the implementation of an AI surveillance system in a public space might include clear signs informing citizens about surveillance and data use, a dedicated website giving the public access to detailed information, regular independent audits, and mechanisms for individuals to provide feedback or file complaints. Such a balance between protection and autonomy is crucial in designing AI regulations that respect both security and liberty.


Mackenzie's potential and occurrent vulnerabilities


Philosopher Catriona Mackenzie introduced the distinction between potential (or dispositional) and occurrent vulnerabilities. A potential vulnerability is a state in which a risk of harm exists but is not yet concrete and tangible: conditions that could lead to harm are present, but they pose no immediate danger. An occurrent vulnerability, by contrast, is a concrete and tangible state in which the risk has become imminent or actual harm is occurring; it necessitates immediate action to prevent or mitigate harm. The distinction is significant because it helps identify whether a vulnerability requires immediate intervention or preventive measures against future harm, allowing for more targeted and effective responses.


In this sense, think of AI systems used in predictive policing. Potential vulnerabilities might include the risk of specific communities being unfairly targeted by predictive AI trained on historical data. Occurrent vulnerabilities arise when actual policing practices result in the over-policing and surveillance of these communities, leading to mistrust and harm. Another example of potential vulnerability is an AI hiring system trained on data containing subtle biases, such as historical preferences for certain demographics over others. Using such a system creates a dispositional vulnerability: its users may make biased decisions, opening the door to discrimination. For an occurrent vulnerability, consider an AI system managing financial transactions that detects an unusual pattern indicating an ongoing cyber-attack. The system's security protocols are being actively compromised, allowing unauthorized access to sensitive financial data. The users of this AI system are occurrently vulnerable because the risk has materialized and immediate action is needed to mitigate the harm.


Practical implications of these theories


Understanding these theories helps us recognize the multidimensional ways in which AI technologies can affect human vulnerability. For instance, identifying potential vulnerabilities can inform the proactive design of AI systems to mitigate risks before they materialize. Additionally, recognizing the layered nature of vulnerability ensures that AI applications consider the varying circumstances of different user groups.


Proposing a taxonomy for understanding vulnerability in the context of AI technologies


The previous examples demonstrate that it is essential to develop a comprehensive taxonomy to address the complex and multifaceted nature of vulnerability in the context of AI. This taxonomy should consider the various factors contributing to human vulnerability and offer a structured approach to identifying and mitigating it. Drawing on the theories discussed and the insights from the literature, here are a number of factors that, if considered, may help to better uncover human vulnerability in the context of AI technologies (a brief illustrative sketch of how they might be combined follows the list of factors below).


Demographic and socioeconomic conditions


Demographic and socioeconomic conditions play a significant role in determining vulnerability in the digital space. Certain groups may be more susceptible to harm due to age, gender, education level, income, or ethnicity. These factors can create disadvantages that are exacerbated by AI technologies. Older adults may struggle with AI-driven online banking services due to a lack of digital literacy, making them more vulnerable to fraud or errors.


Psychosocial characteristics


Psychosocial characteristics, including mental health, emotional state, and cognitive abilities, influence how individuals interact with AI systems and can affect their susceptibility to manipulation or harm by AI technologies. Individuals with mental health issues may be more vulnerable to AI algorithms that target them with ads for potentially harmful products, such as gambling or alcohol.


AI-related competencies and literacy


A user's knowledge and skills related to AI can significantly influence their vulnerability. Those with a limited understanding of how AI works, or of how to navigate AI-driven systems, may be at higher risk of exploitation or harm. Consumers who do not understand how recommendation algorithms work may be more likely to fall for misleading product suggestions or biased information.


Contextual, relational, and situational factors


The context in which AI technologies are used can create situational vulnerabilities. These factors include the environment, time, and specific circumstances under which an AI system is employed. Gig economy workers using AI-driven platforms may face vulnerabilities related to job security and working conditions, which can vary significantly based on location and demand.


Power imbalances and information asymmetries


Power imbalances between AI developers and users, as well as information asymmetries, contribute to vulnerability. Users often lack the knowledge or resources to fully understand or challenge AI systems, placing them at a disadvantage. Social media platforms using AI to curate content can manipulate user behavior through algorithmic maximization of engagement without users fully understanding the extent of this influence.


Temporal factors


The timing and duration of AI interactions can influence vulnerability. Some vulnerabilities may be short-term and situational, while others can be long-lasting and chronic. Temporary financial vulnerability due to job loss can make individuals more susceptible to predatory AI-driven loan services.
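

To make these factors easier to reason about, here is a minimal sketch of how the taxonomy could be operationalized in software, for instance as part of a risk-assessment checklist for AI deployments. It is purely illustrative: the class names, fields, and example values are assumptions made for this post, not an existing tool or a deliverable of the cAIre project. The sketch combines the six factors above with Luna's idea of layers and Mackenzie's distinction between potential and occurrent vulnerability.

```python
# Illustrative sketch only: names, factors, and values are hypothetical
# assumptions, not an existing framework or part of the cAIre project.
from dataclasses import dataclass, field
from enum import Enum


class Factor(Enum):
    """The six factor categories proposed in the taxonomy above."""
    DEMOGRAPHIC_SOCIOECONOMIC = "demographic and socioeconomic conditions"
    PSYCHOSOCIAL = "psychosocial characteristics"
    AI_LITERACY = "AI-related competencies and literacy"
    CONTEXTUAL_RELATIONAL = "contextual, relational, and situational factors"
    POWER_INFORMATION = "power imbalances and information asymmetries"
    TEMPORAL = "temporal factors"


@dataclass
class VulnerabilityLayer:
    """One 'layer' in Luna's sense: a factor, a description, and whether
    the risk is potential (dispositional) or occurrent (Mackenzie)."""
    factor: Factor
    description: str
    occurrent: bool = False  # False = potential risk; True = actual harm


@dataclass
class VulnerabilityAssessment:
    """A layered (not labeled) assessment of a person in a given AI context."""
    context: str
    layers: list[VulnerabilityLayer] = field(default_factory=list)

    def occurrent_layers(self) -> list[VulnerabilityLayer]:
        """Layers where harm is already materializing and needs action now."""
        return [layer for layer in self.layers if layer.occurrent]


# Usage: the gig-economy driver from the example earlier in this post.
assessment = VulnerabilityAssessment(
    context="ride-sharing platform, late-night shift",
    layers=[
        VulnerabilityLayer(Factor.CONTEXTUAL_RELATIONAL,
                           "safety concerns during late-night driving"),
        VulnerabilityLayer(Factor.POWER_INFORMATION,
                           "opaque pricing algorithm sets pay unilaterally"),
        VulnerabilityLayer(Factor.TEMPORAL,
                           "income drop during low-demand weeks",
                           occurrent=True),
    ],
)
for layer in assessment.occurrent_layers():
    print(f"Immediate attention: {layer.factor.value} -> {layer.description}")
```

Even as a toy model, encoding vulnerability as a list of layers rather than a single label makes the intersectional and transitory nature of the concept explicit: layers can be added, removed, or flipped from potential to occurrent as a person's circumstances change.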


Conclusion: Do we need to rethink what human vulnerability is in the context of AI?


The previous classification is a first attempt to tackle the question of vulnerability in the context of AI in a structured way. It remains open to amendment over the course of this research project and to external feedback. While this taxonomy may not settle the main question of whether we need to rethink the concept of vulnerability in the context of AI technologies, it may help identify new situations and guide both innovators and policymakers on inclusive design and risk assessment.


Still, an unanswered question remains: in light of the rapid evolution of AI technologies, do we need to revisit the legal conceptualization of human vulnerability at the international level? This legal question is the object of an independent research project that will soon materialize in a research paper.


In brief…to be continued.
