Council of Europe creates tool to assess impact of AI on human rights

by Pieter Werner

The Council of Europe’s Committee on Artificial Intelligence (CAI) has introduced the HUDERIA Methodology, a structured tool designed to assess the risks and impacts of artificial intelligence systems on human rights, democracy, and the rule of law. This methodology aligns with the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, the first legally binding international treaty in this domain.

The HUDERIA Methodology provides both public and private entities with guidance to identify and address potential harms from AI systems throughout their lifecycle.

The methodology emphasizes the contextual nature of AI impacts, analyzing how technical systems interact with societal structures. It includes provisions for creating risk mitigation plans to address identified issues, such as adjusting algorithms or implementing oversight when biases or other risks are detected. It also mandates periodic reassessments to ensure continued compliance with human rights and safety standards as technologies and contexts evolve.

Adopted during the CAI’s 12th plenary session in November 2024, HUDERIA is part of broader efforts to operationalize the Framework Convention on AI, which was opened for signature in September 2024. The convention aims to ensure that AI activities respect human rights and democratic principles while supporting innovation. HUDERIA will be further supported in 2025 by a complementary HUDERIA Model, offering additional resources and recommendations to enhance its application.

The framework is available on the Council of Europe's website.
