A Threat Modeling Approach for AI Security Systems

State: Assigned to Jamo Sharif

The usage of Artificial Intelligence (AI) with various methods (Machine Learning, Deep Learning, etc.) has been shown to solve problems in the cybersecurity field that were not considered effectively solvable before. For example, the detection of malware has mainly relied on static analysis (i.e., checking the contents of a malware artifact), which can easily be evaded through obfuscating changes to the source code. Here, behavior-based approaches leveraging ML were able to detect the more generic behavior of malicious software, even when the source code is modified [1]. While a large share of companies have already moved beyond research into the adoption of an AI system for their Information Security Program, clear challenges remain [2]. As identified by a recent study [3], an inappropriate threat model is one of the most prevalent pitfalls of applying AI to the security field. For example, an AI system may be able to detect a security breach in a server. However, it may not be clear whether the threat model considered by the system allows safe operation of the AI system's own activities (unfalsified data collection, model evaluation, reporting).

This thesis involves the development of a threat modeling approach that allows system implementers to model the (hostile) execution environment of an AI system and highlight potential threats to that system. To this end, existing security guidelines [4] could be leveraged by transforming them into a threat modeling methodology. For a prototypical implementation of the approach, CoReTM, a visual threat modeling tool, can be extended [5].

[1] O. A. Aslan, R. Samet, "A Comprehensive Review on Malware Detection Approaches," IEEE Access, vol. 8, pp. 6249–6271, 2020.
[2] IBM, "AI and automation for cybersecurity." Available online.
[3] D. Arp, E. Quiring, F. Pendlebury, A. Warnecke, F. Pierazzi, C. Wressnegger, L. Cavallaro, K. Rieck, "Dos and Don'ts of Machine Learning in Computer Security," 31st USENIX Security Symposium (USENIX Security 22), Boston, MA, USA, 2022.
[4] OWASP, "OWASP AI Security and Privacy Guide." Available online.
[5] J. von der Assen, M. F. Franco, C. Killer, E. J. Scheid, B. Stiller, "CoReTM: An Approach Enabling Cross-Functional Collaborative Threat Modeling," IEEE International Conference on Cyber Security and Resilience, Virtually, Europe, July 2022, pp. 1–8. Available online.

30% Design, 60% Implementation, 10% Documentation
Knowledge or Interest in AI Security

Supervisor: Jan von der Assen