Generative AI-Enabled Cybersecurity Operations
The 2019 and 2021 Federal Cybersecurity R&D Strategic Plans continued to highlight the needs and benefits at the intersection of AI/ML and cybersecurity. The potential use of generative AI, including Large Language Models (LLMs), for cybersecurity operations may be hindered by misconceptions about its capabilities and by missed opportunities to properly advance it and incorporate it into education and training. Professor Yang is leading his research team at RIT, in collaboration with Professor Pelletier at RIT and Professors Miller Raffaella and Borys at the University of Rochester Warner School of Education, to develop an immersive professional learning program leveraging the state-of-the-art Cyber Range at RIT.
His research team, including second-year Ph.D. student Reza Fayyazi from Iran and first-year Ph.D. student Oluyemi Amujo from Nigeria, both in the Electrical and Computer Engineering Ph.D. Program at RIT, is investigating efficient prompt reasoning and fine-tuning techniques for LLMs to assist various aspects of cyber-ops. The research findings will not only advance the fundamental understanding of the proper uses and expectations of LLMs for cyber-ops, but also enable the incorporation of generative AI into professional incident response training. Imagine, for example, an LLM that summarizes cyber threat intelligence and analysts' actions to provide the incident response team with real-time situational awareness. Between this training program and other research projects, Professor Yang and his students aim to chart the path for effective and meaningful uses of LLMs for cyber-ops.
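To make the summarization use case concrete, the pipeline might look something like the sketch below. This is a hypothetical illustration, not the project's actual system: the `build_prompt` and `summarize` functions, the alert/action fields, and the output format are all assumptions, and in practice `summarize` would call a fine-tuned LLM rather than the simple placeholder used here.

```python
# Hypothetical sketch of an LLM-assisted incident-response summarizer.
# In a real deployment, summarize() would invoke a (fine-tuned) LLM;
# here a placeholder stands in so the sketch is self-contained.

def build_prompt(alerts, analyst_actions):
    """Assemble raw threat-intelligence alerts and analyst actions
    into a single summarization prompt."""
    lines = ["Summarize the current incident for the response team:"]
    lines += [f"ALERT: {a}" for a in alerts]
    lines += [f"ACTION: {act}" for act in analyst_actions]
    return "\n".join(lines)

def summarize(prompt: str) -> str:
    """Placeholder for the LLM call: extracts counts and the most
    recent alert and action to form a brief situational summary."""
    alerts = [l for l in prompt.splitlines() if l.startswith("ALERT:")]
    actions = [l for l in prompt.splitlines() if l.startswith("ACTION:")]
    return (f"Situation: {len(alerts)} alert(s), "
            f"{len(actions)} analyst action(s). "
            f"Latest alert: {alerts[-1][7:]}; "
            f"last action: {actions[-1][8:]}.")

alerts = ["Suspicious outbound traffic from host 10.0.0.5",
          "Credential-stuffing attempts on VPN gateway"]
actions = ["Isolated host 10.0.0.5 from the network"]
print(summarize(build_prompt(alerts, actions)))
```

The point of the sketch is the division of labor: raw, fast-changing telemetry is assembled into a prompt, and the model's job is only to condense it for human responders, keeping the analyst in the loop.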