EXPLORING ARTIFICIAL INTELLIGENCE’S IMPACT ON VULNERABLE GROUPS THROUGH LEGAL CASES
- Elisa Simó Soler
- Jan 9
- 2 min read
Updated: Feb 1
Artificial Intelligence (AI) is reshaping many aspects of modern life, bringing significant benefits but also notable risks, particularly for vulnerable groups. Examining the criteria applied by courts and data protection authorities in the cases brought before them, the rights allegedly infringed, and the parties involved provides valuable insight into how these risks are being addressed. This field of study is expanding in parallel with the growing number of cases requiring resolution by judicial or data protection authorities. For this reason, subgroup 1.2, "Inventory of comparative judgments and rulings", of the Google cAIre project, led by OdiseIA, has prepared an initial study, whose final version will be available in 2025, to examine this issue in greater depth.
The study identifies vulnerabilities, risks to fundamental rights, and the safeguards needed to ensure that the use of AI complies with the principles outlined by the Organisation for Economic Co-operation and Development (OECD). These include human-centred AI, transparency and explainability, robustness and security, accountability, respect for human rights, and inclusive and impartial implementation. The analysis covers decisions issued between 2013 and 2024 in Europe, the Americas and other regions, categorising cases by affected group and applying an intersectional approach to capture the complexity of overlapping vulnerabilities in AI-related contexts.
The report highlights the critical importance of transparency and algorithmic accountability, inclusive AI design that avoids perpetuating systemic biases, and robust legal frameworks to ensure accountability. It also underscores the interconnectedness of these principles, noting that the fulfilment or violation of one often affects the others, creating cascading effects.
This study provides an essential resource for understanding the intersection of AI and legal considerations for vulnerable groups. By addressing gaps and implementing safeguards, policymakers, organisations and developers can work towards more equitable AI systems. These efforts are critical to maximising the potential of AI while protecting fundamental rights.
Subgroup members: Ana Montesinos García, Elisa Simó Soler, Juan Carlos Hernández Peña, Gabriele Vestri, and Iratxe De Anda Rodríguez.