๐—ฆ๐—ฎ๐—ป๐—ฑ๐—ฏ๐—ผ๐˜…๐—ถ๐—ป๐—ด ๐—ฅ๐—ถ๐˜€๐—ธ ๐—ฎ๐—ป๐—ฑ ๐—ฃ๐—ฎ๐—ฟ๐˜๐—ถ๐—ฐ๐—ถ๐—ฝ๐—ฎ๐˜๐—ถ๐—ผ๐—ป ๐—ณ๐—ผ๐—ฟ ๐—ฅ๐—ฒ๐˜€๐—ฝ๐—ผ๐—ป๐˜€๐—ถ๐—ฏ๐—น๐—ฒ ๐—”๐—œ ๐—ถ๐—ป ๐˜๐—ต๐—ฒ ๐—ฃ๐˜‚๐—ฏ๐—น๐—ถ๐—ฐ ๐—œ๐—ป๐˜๐—ฒ๐—ฟ๐—ฒ๐˜€๐˜?

Invited research talk hosted by Prof. Roel Dobbe at the Sociotechnical AI Systems Lab, Delft University of Technology, on November 12, 2025, from 2-4pm.

๐—”๐—ฏ๐˜€๐˜๐—ฟ๐—ฎ๐—ฐ๐˜: The public sector has been increasingly embracing algorithmic decision-making, machine learning and data-centric infrastructures for many essential and often high-risk public services. With the rapid emergence of LLMs and Generative AI systems there has been a renewed thrust to incorporate AI-based models and conversational AI interactions to improve digital services for citizens, rapid decision-making and reducing costs, without critically contending with the greater risks they may pose for inaccuracy, misinformation, breach of privacy or marginalization of its users. As such โ€˜Public AI Servicesโ€™ become more prevalent and affect citizensโ€™ lived experiences, we must critically question their social, political and ethical implications to examine the rights, risks and responsibilities for both the providers and recipients of such services, particularly the most vulnerable in society. Diverse discourses around the EU AI Act and other AI regulatory frameworks offer a timely opportunity to examine the emerging public values being incorporated, while engaging multi-stakeholder and citizen participation in shaping them.

In this talk, I rethink how we go beyond technical notions of AI safety to engage the sociotechnical realm and devise more inclusive, trustworthy and responsible AI practices in the public interest. I discuss the participatory design of conversational AI systems for supporting collaborative migrant counseling services with municipalities in Finland. We consider the role of AI Regulatory Sandboxes in supporting the responsible development and validation of emerging AI systems in conjunction with stakeholders and regulators across the AI lifecycle. Such sandboxes can foster experimentation, co-learning and multi-stakeholder participation, particularly in high-risk domains. However, participatory design and sandboxing AI have crucial limitations we must address.

The many exceptions in the EU AI Act permit the use of AI in policing, surveillance and military contexts, and the Act lacks enforceable provisions against potential civil and human rights violations. How should researchers, scholars, government actors and civil society understand their implications globally to devise critical policies and practices that mitigate societal harms today? We need to rethink how we tackle notions of AI safety, risk, inclusion and responsible AI in the wider public interest, as an action agenda for future research and pragmatic societal outcomes. The Global AI Policy Research Summit 2025, held at TU Delft, seeks to advance responsible AI governance and evidence-based policies through collaborative research and practice.

Video recording posted on Public Spaces (the talk begins 4 minutes into the video):

The talk was hosted as a hybrid event at TU Delft and online.
Location: Hall A, TU Delft | TPM, Building 31, Jaffalaan 5, Delft
https://studio.publicspaces.net/b/roe-vt3-gqj-vwi