Computational Cybersecurity in Compromised Environments (C3E)
The 2024 Mid-Year Event will be in Annapolis, Maryland on May 9.

The 2024 Computational Cybersecurity in Compromised Environments (C3E) Mid-Year event will be held on May 9th at the Historic Inns of Annapolis in Maryland. The purpose of this meeting, as with all C3E mid-year events, is to sharpen the focus of activities for the Fall workshop.

The themes for the 2024 C3E meeting are Sociotechnical Aspects of Human-AI Systems and Generative AI for Specifications.

Sociotechnical Aspects of Human-AI Systems

This track approaches the interaction of humans and Artificial Intelligence/Machine Learning (AI/ML) as a sociotechnical system: an interdependent system in which the unique roles and authorities of each agent are inseparably coupled. Because state-of-the-art solutions require expertise from multiple disciplines, contributing fields might include social psychology, behavioral economics and decision science, philosophy, linguistics, and the cognitive, computer, and data sciences, to name a few.

This workshop will bring together technologists and social/behavioral researchers to share knowledge, draft recommendations, and identify areas of future research. We intend to foster collaborative efforts across these areas of expertise. Outputs from this workshop might include a research framework or roadmap for this area of research and development.

Questions for consideration:

  • How might theories of bounded rationality and/or distributed cognition contribute to trustworthy AI/ML solutions?
  • Can cognitive modeling contribute to AI/ML methods (e.g., Inverse Reinforcement Learning)?
  • Can human decision-making be modeled as a stochastic system?
    • How might this improve system performance, cybersecurity, and explainability?
    • How might this cause unintended consequences?
  • How might agents (human and artificial) gain a shared theory of mind?
    • Would this be an informative framework for future research?
    • What are the current limitations?
  • What are the key social, psychological, and philosophical factors that must be considered when developing and evaluating AI/ML techniques?

Generative AI for Specifications

System engineering involves a delicate interplay among three tasks: specification (what a system should do); implementation (what a system actually does); and verification (determining whether the two agree). Novel generative AI technologies have emerged that can assist with implementation and verification, for example Microsoft's Copilot and the formal verification tools developed under DARPA PEARLS, respectively. This track will focus on exploring the use of generative technologies to assist with the development of a specification (i.e., a clear description of the desired behavior and structure of a system) that captures designer/engineer/developer intent.
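
To make the three tasks concrete, the short Python sketch below (an illustrative example only, not drawn from the workshop materials) treats the specification as an executable property, supplies a hypothetical implementation named sort_ascending, and performs verification by checking the implementation against the property on a few sample inputs.

    # Illustrative sketch only: specification vs. implementation vs. verification.
    from typing import List

    # Specification: a precise statement of desired behavior,
    # expressed here as an executable property over inputs and outputs.
    def satisfies_spec(inp: List[int], out: List[int]) -> bool:
        same_elements = sorted(inp) == sorted(out)              # output is a permutation of the input
        is_ordered = all(a <= b for a, b in zip(out, out[1:]))  # output is in non-decreasing order
        return same_elements and is_ordered

    # Implementation: what the system actually does (could be human- or AI-generated).
    def sort_ascending(values: List[int]) -> List[int]:
        return sorted(values)

    # Verification: determining whether implementation and specification agree,
    # here by checking the property on a handful of sample inputs.
    if __name__ == "__main__":
        samples = [[], [3, 1, 2], [5, 5, 1], [-2, 0, 7, 7]]
        for sample in samples:
            assert satisfies_spec(sample, sort_ascending(sample)), f"spec violated for {sample}"
        print("implementation agrees with specification on all samples")

In the framing of this track, the open question is how generative technologies could help elicit and refine something like the satisfies_spec property from designer intent, rather than producing the implementation itself.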

Questions for consideration might include:

  • What multimodal inputs might be available to represent the current state of the system specification?
  • What will be the dialogue between the human and machine?
  • How will the effectiveness of generative technologies be maximized through a set of prompts?
  • What methods of prompt generation will be appropriate/effective for ensuring user alignment during specification refinement?
  • Given a likely iterative human-machine teaming loop, what might be meant by a partially constructed specification? What forms of input will the user provide to further refine the specification?
  • How will generatively constructed candidate specifications be evaluated?

Attendance at the May 9th event is by invitation only and limited due to time and space constraints. For more information, please contact the organizers at c3e@cps-vo.org.

Sponsors     
Glenn Lilly (NSA)     
Brad Martin (DARPA)

Organization     
Andy Caufield (CPVI)
Katie Dey (Vanderbilt University)     
Dan Wolf (CPVI)

The workshop is sponsored by the Special Cyber Operations Research and Engineering (SCORE) Interagency Working Group.