- Friday, April 2nd, 2022 - Submissions Due
- Friday, April 30th, 2022 - Notification
- Friday, June 10th, 2022 - Camera-ready Due
- June 14, 2022 - Workshop Date
Fielded applications of planning must interact with a world that can be hostile in unexpected ways. Game-theoretic means of handling adversarial environments are computationally expensive and impose their own assumptions, which can themselves be violated or exploited. This makes understanding how planning techniques react to provocations and false information a critical requirement for deploying planning systems. The workshop will look at both sides of the problem — from characterizing attacks and vulnerabilities to proposed methods for detecting and mitigating threats.
Topics of interest include:
- Use cases and techniques for effective deception against planning systems
- Techniques for identifying and reacting to adversarial conditions
- Theoretical foundations and results for deception and robustness
- Analysis of robustness and stability of planning frameworks, including planning engines and corresponding knowledge models
- Validation and verification of planning systems deployed in adversarial environments
- Relationships between game theory, automated planning, and applications of theory to planning system deployment
- Adversary modeling, both for evaluating the effect of deception on an actor and identifying adversarial inputs
- Applications of planning in adversarial conditions, e.g., cybersecurity operations
How cyberattacks can facilitate deceptive behaviour in autonomous systems
Autonomous systems are deployed to manage dynamic, complex environments that include multiple human and machine actors. One significant application relevant to this talk is vehicle control. In such systems, there is ample opportunity for actors to take specific actions that initiate deceptive behaviours, deliberately influencing an autonomous system to achieve a desired outcome, whether malicious or for personal gain. There are many different types of deceptive behaviour, and in this talk a classification framework is presented that defines the various deception types, providing examples of their impact on managing traffic infrastructure. For an adversary to achieve deceptive behaviours, the actor must establish the technical means to have the desired impact on the autonomous system. This talk investigates how different types of cyberattacks can be performed, and how even an easy-to-execute attack can facilitate significant deceptive behaviours. Finally, the classification framework is extended to define how deceptive behaviours can result from a cyberattack, and how their impact can be measured and used to prioritise mitigation.
Bio: Simon Parkinson is an Associate Professor of Cyber Security at the University of Huddersfield. He has been exploring topics on the interface between AI and Cyber Security for over a decade and has secured funds to undertake research and knowledge exchange from sources such as the UK’s Engineering and Physical Sciences Research Council, Innovate UK, and Defence Science and Technology Laboratory. One such example focused on understanding Cyber Security threats facing Connected and Autonomous Vehicles. He has published on this topic in leading journals and conferences. He has served as track chair and organised workshops at the ICAPS conference and is a serving member of an advisory group to regional law enforcement. He has delivered invited seminars for the UK Government on connected and autonomous vehicles and Cyber Security and global manufacturing organisations such as Alba Aluminium, Bahrain.
Autonomous Cyber Deception -- Foundations and Practices
The ultimate objective of cyber deception is to mislead adversaries away from the "true" targets while at the same time engaging them in order to learn new attack tactics and techniques. Most existing deception solutions are operationally expensive, yet easily discoverable by attackers because they lack dynamism and adaptivity. Autonomous cyber deception provides highly adaptive, embedded deception that can dynamically create and orchestrate honey resources to cope with the dynamic behavior of Advanced Persistent Threats (APTs).
In this talk, we will present our research experience in developing the theoretical foundations, prototype implementation, and evaluation of optimal planning for autonomous cyber deception. The goal is to enable cyber deception agents residing in production systems to automatically create and orchestrate deception ploys that steer and mislead malware or APT attacks toward a desired goal without human interaction. The deception ploys are composed dynamically based on the deception plan, while ensuring safe yet fast deployment and orchestration of deceptive courses of action. We will show that the system can deceive APT information stealers, ransomware, and Remote Access Trojans (RATs) within a few seconds and at minimal cost.
Bio: Dr. Al-Shaer is a Distinguished Research Fellow at the Institute of Software Research (ISR) in the School of Computer Science and a faculty member of CyLab at Carnegie Mellon University. Dr. Al-Shaer's key research areas are autonomous cyber defense, formal methods for security configuration, resilience of cyber and cyber-physical systems, data analytics for cybersecurity, and cyber agility for deterrence and deception. He has edited or co-edited more than 9 books and book chapters and published more than 250 refereed articles. He was designated by the Department of Defense (DoD) as a Subject Matter Expert (SME) on security analytics and automation in 2011, named Distinguished Career Professor at INI/CMU, and received an IBM Faculty Award in 2012 and a UNC Charlotte Faculty Research Award in 2013. Dr. Al-Shaer has served as general chair, TPC chair, keynote speaker, and panelist at major conferences in this area, including ACM CCS, IEEE IM 2007, and IEEE POLICY 2008.
June 14 — 14:00-19:00 (UTC), 10:00-15:00 (EDT), 00:00-05:00 (Canberra)
| Time (UTC) | Talk |
|---|---|
| | Keynote: Simon Parkinson. How cyberattacks can facilitate deceptive behaviour in autonomous systems |
| 15:10 | Using the Fast Downward System in CPCES |
| 15:30 | Knowledge Reformulation and Deception as a Defense Against Automated Cyber Adversaries |
| | Keynote: Ehab Al-Shaer. Autonomous Cyber Deception — Foundations and Practices |
| 17:00 | Proposing an Architecture to Integrate Stochastic Game and Automated Planning Methods into a Comprehensive Framework: CHIP-GT |
| 17:20 | NetStack: A Game Approach to Synthesizing Consistent Network Updates |
| 17:50 | Evaluating the robustness of automated driving planners against adversarial influence |
| 18:10 | Plan Critiquing for Assessing Adversarial Effects on Plans |
| 18:30 | On the robustness of domain-independent planning engines: the impact of poorly-engineered knowledge |
Submission Information (historical information)
Authors may submit technical papers of up to 8 pages (+1 for references). We also encourage the submission of short papers (4+1 pages) focusing on preliminary results, and challenge papers (2 pages) discussing open problems. All submissions should be anonymized and conform to the AAAI style template. Papers must be submitted in PDF format via the EasyChair system. Submissions will be reviewed by at least two referees.
We welcome the presentation of relevant work published recently in archival conferences and journals. For these submissions, please submit a short note pointing to the work and justifying its relevance. If the work is still under review, attach an anonymized copy. Already published work will not be included in the proceedings.
- Paper submission deadline: April 2nd, 2022 (AOE)
- Notification: April 30th, 2022
- Camera-ready paper submission: June 10th, 2022
- Lukas Chrpa, Czech Technical University, Czech Republic
- Mauro Vallati, University of Huddersfield, UK
- Andy Applebaum, Apple, USA
- Ron Alford, The MITRE Corporation, USA
- Mark Wilson, Independent, USA
- Simon Parkinson, University of Huddersfield, UK
- Saad Khan, University of Huddersfield, UK