Synergistic Architecture for Human-Machine Intrusion Detection


Posted: January 26, 2017 | By: Dr. Noam Ben-Asher, Paul Yu


Currently, cyber attackers hold an asymmetric advantage over defenders. This unfavorable and vulnerable position calls for robust and efficient intrusion detection mechanisms. While current detection workflows involve human defenders and rely on their analytical capabilities, we argue that improving detection and protecting networks against sophisticated attacks requires a non-linear, interactive analyst-in-the-loop approach. In this approach, cyber defenders have the means to interact with, and exert influence on, every component of the detection process. Furthermore, we posit that the analyst's role is to lead and supervise automated detection processes, resolve ambiguity, and provide mission-relevant contextual information, rather than to sift through large volumes of information and weed out false alerts. Positioning the defender as the controller of the detection process, instead of a handler of alerts, allows the defender to direct analytical capabilities to the tasks where their contribution has the greatest impact; this efficient allocation of the defender's analytical capabilities improves both detection accuracy and speed. This study depicts an analyst-in-the-loop detection framework and describes the types of interaction required among the evidence collection, the inference engine, and the analyst. The use of queries and operations to improve detection is demonstrated, establishing the foundations for more detailed operational definitions of these interactions.
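To make the control relationship concrete, the following is a minimal sketch of one analyst-in-the-loop detection cycle. All names (`collect`, `infer`, `analyst_review`, the score threshold of 0.95, the score fields) are illustrative assumptions, not components defined in the article; the point is only the control flow: the engine flags evidence, and the analyst either confirms an alert or issues control actions (retask collection, adjust thresholds) rather than triaging every alert by hand.

```python
def collect(events, focus=None):
    """Evidence collection; the analyst may narrow it to a focus."""
    return [e for e in events if focus is None or focus in e["source"]]

def infer(evidence, threshold):
    """Inference engine: flag observations scoring above a threshold."""
    return [e for e in evidence if e["score"] >= threshold]

def analyst_review(alerts):
    """Analyst resolves ambiguity and issues control actions
    instead of handling bulk alerts."""
    confirmed, actions = [], {}
    for a in alerts:
        if a["score"] >= 0.95:               # unambiguous: confirm
            confirmed.append(a)
        else:                                # ambiguous: query for more evidence
            actions["focus"] = a["source"]   # redirect collection
            actions["threshold"] = 0.6       # loosen within the focused scope
    return confirmed, actions

def detection_cycle(events, threshold=0.8, max_rounds=3):
    """Analyst supervises the loop: each round's control actions
    re-parameterize collection and inference for the next round."""
    focus, confirmed = None, []
    for _ in range(max_rounds):
        alerts = infer(collect(events, focus), threshold)
        new_confirmed, actions = analyst_review(alerts)
        confirmed += [a for a in new_confirmed if a not in confirmed]
        if not actions:                      # analyst satisfied: stop
            break
        focus = actions.get("focus", focus)
        threshold = actions.get("threshold", threshold)
    return confirmed
```

In this sketch the analyst never enumerates raw events; their output is either a confirmed alert or a query that re-scopes the automated components, which is the "controller, not handler" stance the article argues for.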


