Trinity College Dublin establishes AI Accountability Lab
A new research group designed to advance artificial intelligence accountability research launches today at Trinity College Dublin. The AI Accountability Lab (AIAL) will be led by Dr Abeba Birhane, research fellow in the Adapt Research Ireland centre at the School of Computer Science & Statistics. The Lab will focus on critical issues across broad topics, from the examination of opaque technological ecologies to the auditing of specific models and training datasets.
Dr Birhane said: “The AI Accountability Lab aims to foster transparency and accountability in the development and use of AI systems. And we have a broad and comprehensive view of AI accountability. This includes better understanding and critical scrutiny of the wider AI ecology – for example via systematic studies of possible corporate capture, to the evaluation of specific AI models, tools, and training datasets.”
The AIAL is supported by a grant of just under €1.5 million from three groups: the AI Collaborative (an initiative of the Omidyar Group), Luminate, and the MacArthur Foundation.
AI technologies, despite their supposed potential, have been shown to encode and exacerbate existing societal norms and inequalities, disproportionately affecting vulnerable groups. In sectors such as healthcare, education, and law enforcement, deploying AI technologies without thorough evaluation can have subtle yet catastrophic impacts on individuals and groups, and can also alter social fabrics. For example, in healthcare, a liver allocation algorithm used by the UK’s National Health Service (NHS) has been found to discriminate by age. No matter how ill, patients under the age of 45 seem currently unable to receive a transplant, due to the predictive logic underlying the algorithm.
Additionally, incorporating AI algorithms without proper evaluation has a direct or implicit impact on people. For example, a decision support algorithm used by the Danish child protection services, deployed without formal evaluation, has been found to suffer from numerous issues, including information leakage, inconsistent risk scores, and age-based discrimination.
These few examples illustrate the need for transparency, accountability, and robust oversight of AI systems, which are central topics the AI Accountability Lab seeks to address through research and evidence-driven policy advocacy.
Prof John D Kelleher, Director of Adapt and Chair of Artificial Intelligence at Trinity, added: “We are proud to welcome the AI Accountability Lab to Adapt’s vibrant community of multidisciplinary experts, all dedicated to addressing the critical challenges and opportunities that technology presents. By integrating the AIAL within our ecosystem, we reaffirm our commitment to advancing AI solutions that are transparent, fair, and beneficial for society, industry, and government. With the support of Adapt’s collaborative environment, the Lab will be well positioned to drive impactful research that safeguards individuals, shapes policy, and ensures AI serves society responsibly.”
In its initial stages, the AIAL will leverage empirical evidence to inform evidence-driven policies; challenge and dismantle harmful technologies; hold responsible bodies accountable for adverse consequences of their technology; and pave the way for a future marked by just and equitable AI. The group’s research objectives include addressing structural inequities in AI deployment, examining power dynamics within AI policy-making, and advancing justice-driven audit standards for AI accountability.
The lab will also collaborate with research and policy organisations across Europe and Africa, such as Access Now, to strengthen international accountability measures and policy recommendations.
TechCentral Reporters