Program Agenda

2022 NSF SafeTAI Workshop

Held virtually over Zoom
September 22 - 23, 2022


Zoom information has been sent to attendees via email; one Zoom link is used for both days, listed below without the passcode. If you have problems, please contact the organizers:

Zoom link:

Meeting ID: 941 2266 8447

Passcode: sent via email; contact the organizers if you have problems connecting

The workshop will involve about five hours of content each day, scheduled roughly 10am-4pm Eastern to accommodate various time zones. Each day will feature keynotes and short introductory talks, followed by parallel breakout sessions on topical areas of safety and trust in AI-enabled systems, viewed particularly through formal methods/verification and theory lenses. These interactive breakouts will be used to brainstorm research challenges, directions, and opportunities for the community, and to outline the resulting workshop report. Several scheduled breaks will also provide opportunities for networking and discussion with other attendees. The interactive program is modeled loosely on a Dagstuhl-style event and a prior NSF Workshop on Formal Methods for Security, and workshop participants will provide input for the resulting workshop report during the interactive breakout sessions. Somewhat like an NSF review panel, participants should aim to treat the workshop as their primary activity on these dates, although we of course realize the event falls during the academic year for many, who may need to step out at times.


9/22/2022 - Day 1: Introduction and Research Challenges
Time (Eastern) 
10:00am - 11:00am

Introduction and Attendee Introductions/Interests (Organizers); Breakout Overviews, Goals, and Topics

10:00am - 10:05am: Organizer Welcome

10:05am - 10:15am: NSF Welcome: Joydip (JD) Kundu, Deputy Assistant Director of NSF CISE

10:15am - 10:25am: Workshop Goals: Pavithra Prabhakar, Program Manager for CCF/SHF

10:25am - 11:00am: Attendee Introductions (1-slider/quad chart; add your slide to the shared Google Slides sent via email)

11:00am - 12:15pm Keynote and Q&A
Artificial Intelligence: Do you trust it? 
Dr. Kathleen Fisher, DARPA I2O Director
12:15pm - 12:30pm Break
12:30pm - 1:30pm Keynote and Q&A
On Audits, Algorithms and Accountability 
Inioluwa Deborah Raji, Mozilla Foundation and University of California, Berkeley
1:30pm - 2:00pm Lunch break
2:00pm - 2:15pm Breakout Session Plans and Outlining of Report Structure (Organizers)
2:15pm - 3:30pm

Breakout Session 1: Parallel breakout sessions of ~4-6 attendees each (~15 minutes of discussion on backgrounds; ~45 minutes of discussion on definitions and on research and societal challenges and risks in safety and trust in AI; ~15 minutes for a summary statement and working time on report topics, identifying any missing issues, challenges, and research questions around safety and trust in AI)



3:30pm - 4:00pm

Summary of Breakouts: Reconvene as a group; each breakout session chair and participants give a summary presentation to the whole group (~5 min each)


~4:00pm Close for day
9/23/2022 - Day 2: Possible Solutions and Research Directions
Time (Eastern) 
10:00am - 10:15am Introduction and Day Goals (Organizers)
10:15am - 11:00am Attendee Introductions (with optional 1-slider/quad chart; add your slide to the shared Google Slides sent via email)
11:15am - 11:30am Breakout Overviews, Goals, and Topics (Organizers)
11:30am - 12:45pm Breakout Session 2:
Parallel breakout sessions of ~4-6 attendees each, focusing on candidate solutions and research directions for safety and trust in AI
12:45pm - 1:30pm Lunch break
1:30pm - 1:45pm Summary of Breakouts: Reconvene as a group; each breakout session chair and participants give a summary presentation to the whole group (~5 min each)
1:45pm - 3:00pm Breakout Session 3:
Parallel breakout discussions and working time on report topics, focusing on candidate solutions and research directions
3:15pm - 3:45pm Summary of Breakouts: Reconvene as a group; each breakout session chair and participants give a summary presentation to the whole group (~5 min each)
3:45pm - 4:00pm Conclusions and Next Steps (Organizers)


Kathleen Fisher, Keynote Address: Artificial Intelligence: Do you trust it?

Abstract: We have seen significant progress in AI over the last ten years, predominantly driven by dramatic advances in machine learning and particularly deep learning. Society is realizing the benefits across a wide range of application domains. However, within the military, the consequence of making a wrong decision based on AI could be catastrophic. And the DoD must defend against nation-state level adversaries with significant resources, the ability to create deception, and the desire to change our way of life. DARPA is funding research in trustworthy AI technologies and systems that can be trusted to perform as expected despite the efforts of sophisticated adversaries. In this presentation, I will discuss research efforts in AI systems that we can trust with our (and warfighters') lives and explore fundamental advances beyond statistical ML that appear promising toward reaching the goal of trustworthy AI.

Inioluwa Deborah Raji, Keynote Address: On Audits, Algorithms and Accountability

Abstract: As algorithmic deployments infiltrate our daily existence, it has become increasingly clear that in addition to the benefits they provide, these systems have also become a source of meaningful harm - significantly disrupting the lives of many real people. At the crux of these issues are biased and incorrect model outcomes that are hard to evaluate and stakeholders that are frustratingly difficult to hold accountable. As a result, policymakers and advocates are increasingly turning to audits as a method to accumulate concrete evidence for algorithmic harm and as a promising approach for accountability. Informed by important lessons from audit systems in other industries, this approach appears in many cases to be truly successful - some audits have already led to product updates or recalls, organizational changes, and developments to regulation or standards. However, difficulties in execution, oversight, and impact threaten the credibility and effectiveness of these audits, and call into question how much we can rely on this intervention without first investing in the technical, legal, and institutional design of a more mature audit ecosystem for algorithmic deployments.