
A workshop exploring AI Safety through a sociotechnical lens to foster a Public Interest AI ecosystem, led by Roel Dobbe (Delft University of Technology) and Nitin Sawhney (Uniarts Research Institute), together with Lauren Pollinger, Jacqueline Kernahan, and Taylor Stone (TU Delft), and organized by the Global AI Policy Research Network on November 18, 2025.

The session is part of the Partners Day Program hosted at the FARI Brussels Conference 2025 on AI, Robots, & Us: Living with Intelligent Agents?, which examines how AI systems and robotic agents are shaping our future and what it means to live alongside them in a rapidly evolving world.

AI Safety has become the focus of recent deliberations and reports, spurred in part by the rapid uptake of generative AI models and robotic agents with greater autonomy and less predictable outcomes. However, AI Safety is often framed in the narrow technical context of system development, testing, and validation, rather than the sociotechnical and institutional contexts that shape – and are shaped by – systems and agents deployed in the real world. A further challenge is that Public Interest organizations and sectors increasingly rely on AI, while most AI Safety work happens within an industrial context. A public approach is needed to clarify what ‘Public Interest AI’ entails; how safety is defined and operationalised within this context; and how systems and agents can be designed and governed to uphold values such as autonomy and sovereignty in the hands of public actors.

This workshop explored how notions of AI Safety can be reframed from a sociotechnical systems perspective to inform actionable research, design, policies and practices in the public interest. This builds on recent research and community building efforts, including a policy brief by the Sociotechnical AI Systems Lab (AISyLab) at TU Delft, outcomes of deliberations at the Global AI Policy Research Summit 2025, and the Roadmap for AI Policy Research, a collaborative effort by participants of the AI Policy Summit 2024, that outlines key priorities for advancing research that strengthens AI governance and human-centered AI.

The half-day interactive workshop was anchored by presentations from 2-3 researchers and practitioners, providing relevant cases for deliberation. Participants shared their own expertise and experiences while contributing to rethinking and reframing issues from a sociotechnical systems perspective. The outcomes of this workshop will inform a set of policy recommendations and a research and action agenda to work towards a Public Interest AI Safety Ecosystem.

Workshop Program Schedule
Welcome and opening for key workshop themes (10 min)
Expert Presentations (45 min)
Q&A + Discussion with participants (10 min)
Break (15 min)
Workshop: Examining Case Studies and Participant Deliberations (45 min)
Presenting workshop outcomes – 3 groups (15 min)
Closing, Summary and Next Steps (10 min)

The Global AI Policy Research Network is a community of practice that serves as a platform for researchers and practitioners to advance responsible AI policy research and actionable strategies. A core objective of the network is to inform global approaches to AI governance by sharing best practices and fostering collaborative AI policy development. The network was established following the inaugural 2024 Summit in Stockholm; a second Summit took place in Delft in November 2025.

More details on the workshop: https://www.eventbrite.be/e/fari2025-rethinking-ai-safety-in-the-public-interest-tickets-1776421678059?aff=oddtdtcreator

This event is held as part of the FARI Brussels Conference 2025 under the theme β€œAI, Robots, and Us: Living with Intelligent Agents?” – Partners Day.