Generative AI is a prime example of the dual-use dilemma: it can be harnessed to drive innovation and solve complex problems, yet it can just as easily be weaponized for malicious purposes such as creating misinformation, automating cyberattacks, or generating sophisticated malware.
A recent simulation by Hoxhunt, a company focused on human risk management and cybersecurity training, demonstrated how generative AI can significantly increase the effectiveness and scale of phishing attacks by crafting highly personalized, convincing messages in seconds. Their generative model outperformed human red teams by almost 25%.
These attacks are executed by a wide range of threat actors with diverse capabilities and motivations. Readily available tools lower the barrier to entry for phishing, and the stolen data is often sold on dark web marketplaces. The primary deliverables of this research are as follows:
Suggested Reading:
Supervisors: Nasim Nezhadsistani, Andy Aidoo