The Cyber Security Collaborative Research Alliance: Unifying Detection, Agility, and Risk in Mission-Oriented Cyber Decision Making


Posted: January 23, 2017 | By: Patrick McDaniel, Ananthram Swami

Agility

Agility refers to the context- and operation-aware reconfiguration of the system or the operation, performed autonomously or by the defender in response to a potential attack or perceived risk. Such reconfigurations of the environment or of operational strategies are referred to as cyber-maneuvers. Maneuvers, often called moving target defenses, seek to continually alter the attack surface as perceived by an adversary. Within the CRA, the research effort focuses on developing models and algorithms that reason about the current state, the universe of potential security-compliant cyber-maneuvers (i.e., “maneuvers” in the space of hardware, software, network, and system characteristics and topologies) and end-states, and how these maneuvers affect, and are affected by, human users, defenders, and attackers. Building on recent advances in moving target defenses [21][22], we are exploring game-theoretic models that select maneuvers to mitigate the effects of adversarial actions on operation outcomes. Note that some maneuvers may be offensive (such as deception techniques) in that they launch counter-measures that impact would-be attackers.

Broadly speaking, in an agile operation environment the system state must be continuously analyzed based on detected threats, assessed risks, and human input on operation evolution. Subsequently, the system must be reconfigured toward: (i) preventing and mitigating attacks, thereby maximizing outcome utility in our operation model; (ii) completing the operation in a secure and resource-optimal way given the current state and the dynamics of the end state; (iii) minimizing risk and accounting for deception; and (iv) integrating the human factors that impact cyber-security operations. An adversary’s perception of the attack surface can be altered by maneuvers at different layers, e.g., the software, network, and system layers. Not surprisingly, one can also formulate agility problems in a game-theoretic setting.
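The reconfiguration objectives above can be sketched as a simple scoring problem over candidate maneuvers. The sketch below is purely illustrative: the maneuver names, attribute weights, and utility function are our own assumptions, not the CRA's actual operation model.

```python
# Hypothetical sketch: scoring candidate cyber-maneuvers by expected utility.
# Weights trade off security benefit, mission utility, and resource cost;
# all values here are illustrative assumptions.

def maneuver_utility(maneuver, w_security=0.5, w_mission=0.3, w_cost=0.2):
    """Combine security benefit, mission utility, and resource cost
    into a single score (higher is better)."""
    return (w_security * maneuver["risk_reduction"]
            + w_mission * maneuver["mission_utility"]
            - w_cost * maneuver["resource_cost"])

def select_maneuver(candidates):
    """Pick the security-compliant maneuver with the highest utility."""
    compliant = [m for m in candidates if m["policy_compliant"]]
    return max(compliant, key=maneuver_utility)

candidates = [
    {"name": "rotate_ip_addresses", "risk_reduction": 0.7,
     "mission_utility": 0.6, "resource_cost": 0.4, "policy_compliant": True},
    {"name": "restart_service",     "risk_reduction": 0.5,
     "mission_utility": 0.9, "resource_cost": 0.2, "policy_compliant": True},
    {"name": "disable_subnet",      "risk_reduction": 0.9,
     "mission_utility": 0.1, "resource_cost": 0.1, "policy_compliant": False},
]

best = select_maneuver(candidates)
```

Note that objective (ii), completing the operation, enters here through the mission-utility term: a maneuver that maximally reduces risk but cripples the mission (like the non-compliant `disable_subnet`) is never selected.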

Our study of software maneuvers seeks to develop the science of software agility. The objective of software agility is to pick the optimal tasks to execute, and the optimal software configuration in which they will execute, given desired security outcomes, risks, and the current state of the system (e.g., attacks, defenses, including psychosocial factors). We achieve this objective by (1) using a proactive approach to software agility to withstand and thwart attacks, and (2) continuously analyzing the system’s software state and, if and when needed, performing software reconfiguration based on detected threats, assessed risks, and human input on operation evolution. Our early efforts focused on reactive approaches for reconfiguring a key-value server and mobile apps, and moved on to the study of proactive reconfiguration, cost/payoff metrics, and approaches beyond smartphones and key-value servers. The cost/benefit analysis balances security, capability, availability, and resource consumption. The Agility team has made advances in several directions, such as the quantification of the cost of reconfigurations [28], the theory and practice of cyber-maneuvering [38], and the characterization of root-provider attacks [41]. Over the next two years we will generalize to more powerful models of maneuver in a formal quasimetric space, reconfiguration, and cost, and to formal guarantees of attack resistance. Agility mechanisms are one form of deception, and a formal approach to deception, including psychosocial metrics of it, warrants study. We proposed software “wrappers” as a flexible mechanism for dynamically changing programs and runtime environments, and have used them for changing data structures on-the-fly in server-side processing [27], changing the OS state [38], and bytecode rewriting to survive faults [1]. Recent work in the CRA is developing a unified approach to encoding configurations, and formal mechanisms for controlling transitions.
A related validation study is analyzing existing and new side channels (e.g., TCP stacks [6]) to understand the limitations of software randomization strategies.
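To make the wrapper idea concrete, the toy sketch below shows a key-value store whose backing data structure can be swapped at runtime without losing data. The class and method names are our own illustration, not the CRA's implementation.

```python
# Illustrative sketch of the "wrapper" idea: a key-value store whose
# backing representation can be replaced on-the-fly as a cyber-maneuver.
# Names here are hypothetical, not taken from the CRA codebase.

from collections import OrderedDict

class KVWrapper:
    """Forwards get/put to a backing store that can be replaced live."""

    def __init__(self, backing_cls=dict):
        self._store = backing_cls()

    def put(self, key, value):
        self._store[key] = value

    def get(self, key):
        return self._store[key]

    def reconfigure(self, new_backing_cls):
        """Maneuver: migrate all entries into a new representation,
        changing the attack surface while preserving the data."""
        new_store = new_backing_cls()
        for k, v in self._store.items():
            new_store[k] = v
        self._store = new_store

kv = KVWrapper(dict)
kv.put("mission", "alpha")
kv.reconfigure(OrderedDict)   # swap representation; clients see no change
```

Because clients interact only with the wrapper's interface, the reconfiguration is invisible to them, which is what lets such maneuvers run mid-operation.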

A key issue in game-theoretic approaches is determining the appropriate models of interaction between the defender and attacker. While it is conceivable that the two may choose their strategies simultaneously, it is more likely that each will choose a strategy in response to the “observable actions” of the other. The tradeoff between leading and following depends on the specific payoff functions as well as the penalty for delaying a player’s action (e.g., missing an attack opportunity). To this end, we are exploring various dynamic game formulations, with different leader/follower roles for the attacker and defender. For example, the defender may lead the game by invoking proactive security measures; the attacker then responds with actions of his or her own. The roles can be switched dynamically, depending on each player’s payoff (e.g., shortly after taking an action, the defender may decide to take a subsequent action without waiting for the attacker’s action; this decision may be triggered by updated statistical analysis of adversarial responses). Such dynamism enables us to capture the bounded regimes of rationality of human adversaries. Other recent work on game-theoretic approaches includes models for stealthy attacks, involving two-player differential games, and asymmetric versions of the FlipIt game in which feedback may be delayed. We have characterized best-response strategies [13][14]. Our three-player game models build on this, adding a third player, the insider, who may be helpful or harmful. We have characterized Nash equilibria in this three-player sequential game.
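The leader/follower dynamic can be illustrated with a tiny two-player matrix game: the defender commits to a maneuver, the attacker best-responds, and the defender chooses the commitment that anticipates that response. The moves and payoff numbers below are illustrative assumptions, not values from our game models.

```python
# Toy leader/follower (Stackelberg-style) sketch. payoffs[d][a] gives
# (defender_payoff, attacker_payoff) for defender move d and attacker
# move a; all numbers are made up for illustration.

payoffs = {
    "randomize": {"exploit": (3, -1), "wait": (2, 0)},
    "static":    {"exploit": (-2, 4), "wait": (1, 0)},
}

def attacker_best_response(defender_move):
    """Follower: pick the action maximizing the attacker's own payoff."""
    return max(payoffs[defender_move],
               key=lambda a: payoffs[defender_move][a][1])

def defender_leader_move():
    """Leader: anticipate the follower's response to each commitment
    and commit to the one with the best resulting defender payoff."""
    return max(payoffs,
               key=lambda d: payoffs[d][attacker_best_response(d)][0])
```

Here committing to `randomize` deters exploitation even though it is not the defender's best payoff in isolation, which is precisely the value of leading rather than following.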

Recent research on the psychology of decision making seeks to understand how humans make decisions from experience (DFE) rather than from descriptions. Such an approach enables one to relax assumptions of rationality. PIs on the team have championed the development and use of Instance-Based Learning (IBL) models that do not need predefined implementations of interaction strategies [16], [15]. IBL can be integrated with automated tools and models of risk assessment in cyber security, e.g., [8], as recently demonstrated in [2]. Our current work addresses key challenges related to scalability with multiple players, cognitive biases and judgment impairments (such as those due to memory and recall limitations), and how attack and defense strategies evolve in repeated games, across multiple attack patterns. Central to most game-theoretic approaches are assumptions of information certainty and human rationality (which includes, for example, the ability to perfectly recall all relevant information). Assumptions of rational behavior on the part of the attacker may lead to poorly performing strategies against a myopic attacker; and assumptions of rational defenders may lead to defense mechanisms that are never realized in practice. We are augmenting our game-theoretic approaches with IBL to model humans with bounded rationality. Psychological research suggests that risk variability in humans may be explained and predicted by cultural and other cognitive factors, and such factors have been observed in the cyber domain to gain insights into attackers [23][37]. Our current focus is on incorporating such personality and cultural factors, for individuals and groups, into behavioral models such as IBL and game-theoretic approaches to account for individual variability and biases [13]. We are further incorporating tools that enable cutting-edge analysis of individual decision-making [24] and exploring how system prompts and presentation affect security outcomes [40].
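The core of IBL, choosing from stored experience rather than a fixed strategy, can be sketched in a few lines. Real IBL models use ACT-R activation with recency, frequency, and noise; the sketch below approximates blending with a plain average per action, an assumption made purely for brevity.

```python
# Simplified Instance-Based Learning sketch: decisions come from stored
# (action, outcome) experiences, not a predefined strategy. The plain
# average used here stands in for the activation-weighted blending of
# real IBL models and is an assumption for illustration.

from collections import defaultdict

class IBLAgent:
    def __init__(self, default_utility=1.0):
        self.memory = defaultdict(list)   # action -> observed outcomes
        self.default_utility = default_utility

    def blended_value(self, action):
        """Average of past outcomes; unexperienced actions get an
        optimistic default, which drives early exploration."""
        outcomes = self.memory[action]
        if not outcomes:
            return self.default_utility
        return sum(outcomes) / len(outcomes)

    def choose(self, actions):
        return max(actions, key=self.blended_value)

    def observe(self, action, outcome):
        """Store a new experience instance."""
        self.memory[action].append(outcome)

agent = IBLAgent()
agent.observe("patch", 0.8)
agent.observe("monitor", 0.3)
```

Because the agent's choices shift as its experience accumulates, the same model can exhibit the biases and recall limitations that perfectly rational game players cannot.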

Discussion & Conclusions

We have introduced a conceptual framework and research agenda for reasoning about cyber-maneuvers in military environments. This model jointly reasons about situational awareness, risk assessment, and software, system and network agility to support ongoing cyber-operations. These factors are integrated into a unified operational model that defenders and automated systems can use to make “optimal” decisions about how to achieve mission goals and mitigate the activities of adversaries.

The inter-dependencies among risk, detection, and agility are clear. Resources spent on detection (how many monitors, how many samples, which algorithms) are dictated by assessments of threat and risk, and in turn feed back into those risk assessments. The outputs of detection (including our confidence in them) provide inputs for agility maneuvers; in turn, agility decisions feed information about network configurations back to detection strategies. Agility algorithms depend on the detection of potential attacks, the risks associated with the perceived attacks, the desired responses of defenders and attackers, the perceived risk in transitioning from the current state to a desired state, and human dynamics. Risk feeds both detection and agility; for both, it shapes the goals and focus of the algorithms. Thus, as is evident from our operation model, the models and algorithms for agility are integrally dependent on risk, detection, and human dynamics.
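This feedback loop among risk, detection, and agility can be sketched in a few functions. The thresholds and update rule below are illustrative assumptions, not parts of the operation model itself.

```python
# Minimal sketch of the risk -> detection -> agility feedback loop.
# Numeric thresholds and the risk-update rule are illustrative
# assumptions, not the CRA's actual model.

def detection_budget(risk):
    """Risk assessment dictates resources spent on detection,
    here as the fraction of monitors enabled."""
    return min(1.0, 0.2 + risk)

def update_risk(risk, alert_rate, confidence):
    """Detection outputs, weighted by our confidence in them,
    feed back into the risk assessment."""
    return min(1.0, risk + confidence * alert_rate)

def maybe_maneuver(risk, threshold=0.6):
    """Sufficiently high assessed risk triggers an agility maneuver."""
    return risk > threshold

risk = 0.3
budget = detection_budget(risk)
risk = update_risk(risk, alert_rate=0.4, confidence=0.9)
trigger = maybe_maneuver(risk)
```

Even in this toy form, the coupling is visible: more risk buys more detection, detection raises or lowers risk, and risk in turn gates the maneuvers.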

Experimental verification and validation have been and continue to be key components of CRA research. While all algorithms are typically tested on synthetic or simulated data, we also make extensive use of the cyber experimentation testbed called Cyber Virtual Ad hoc Network (CyberVAN) [10].

The research toward realizing this vision is just beginning its fourth year, but we have already made great strides in analyzing target environments and developing preliminary models. Our current focus is to bring together these disparate but complementary models into a comprehensive framework, and to measure its effectiveness in realistic military contexts. These experiments will assess the accuracy and sensitivities of the decision-making process and provide guidance for its refinement.

References

  1. K.B. Alexander. Warfighting in cyberspace. Joint Force Quarterly, Issue 46, July 2007.
  2. T. Azim, I. Neamtiu, and L. Marvel. Towards self-healing smartphone software via automated patching. Proc. 29th IEEE/ACM International Conference on Automated Software Engineering (New ideas track), ASE 2014, September 2014
  3. N. Ben-Asher, A. Oltramari, R. Erbacher, C. Gonzalez. Ontology-based Adaptive Systems of Cyber Defense. The 10th International Conference on Semantic Technology for Intelligence, Defense, and Security (STIDS) 201
  4. D.P. Bertsekas. Dynamic programming and optimal control. Athena Scientific, 2005.
  5. D.P. Bertsekas and S.E Shreve. Stochastic optimal control: The discrete time case. Academic Press, 2007.
  6. M. Cains, D. Henshel, B. Hoffman, C. Sample. Integrating Cultural Factors into Human Factors Framework for Cyber Attackers. Proc. 7th Intl. Conf. Applied Human Factors and Ergonomics (AHFE), 2016
  7. Y. Cao, Z. Qian, Z. Wang, T. Dao, S.V. Krishnamurthy, L. M. Marvel. Off-Path TCP Exploits: Global Rate Limit Considered Dangerous (CVE-2016-5696). Proc. USENIX SECURITY 2016, Austin, TX, 2016
  8. Z. B. Celik, N. Hu, Y. Li, N. Papernot, P. McDaniel, J. Rowe, R. Walls, K. Levitt, N. Bartolini, T.F. La Porta, and R. Chadha. Mapping Sample Scenarios to Operational Models.  Proceedings of the IEEE Military Communications Conference (MILCOM), Nov 2016, Baltimore, MD.
  9. J.-H. Cho, H. Cam, A. Oltramari. Effect of personality traits on trust and risk to phishing vulnerability: Modeling and analysis. Proc. IEEE International Multi-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA’2016), March 21-25, 2016, San Diego
  10. Cyber-Security Collaborative Research Alliance, Webpage, August 2016.
  11. Cyber virtual ad hoc network (CyberVan). http://www.appcomsci.com/research/tools/cybervan [Online; accessed 5-September-2016].
  12. G. Deckard, L.J. Camp. Measuring efficacy of a classroom training week for a military cybersecurity training exercise. Proc. IEEE International Conference on Technologies for Homeland Security, (Waltham, MA) 10-16 May 2016.
  13. K. Durkota, V. Lisy, B. Bosansky, and C. Kiekintveld. Optimal network security hardening using attack graph games. Proc. IJCAI, 2015.
  14. X. Feng, Z. Zheng, D. Cansever, A. Swami, and P. Mohapatra. Stealthy Attacks with Insider Information: A Game Theoretic Model with Asymmetric Feedback. Proc. IEEE MILCOM 2016, Baltimore, MD, Nov 2016.
  15. X. Feng, Z. Zheng, P. Hu, D. Cansever, and P. Mohapatra. Stealthy Attacks Meets Insider Threats: A Three-Player Game Model. Proc. IEEE MILCOM 2015, Tampa, FL, Oct 2015.
  16. C. Gonzalez, N. Ben-Asher, J. Martin, V. Dutt. A cognitive model of dynamic cooperation with varied interdependency information. Cognitive Science, 39, 457-495, 2015.
  17. C. Gonzalez, N. Ben-Asher, A. Oltramari, C. Lebiere. Cognition and technology. In Cyber Defense and Situational Awareness, pp. 93-117. Springer International Publishing, 2014
  18. D. Henshel, A. Alexeev, M. Cains, B. Hoffman, I. Neamtiu, J. Rowe. Modeling cybersecurity risks: Proof of concept of a holistic approach for integrated risk quantification. Proc. IEEE Intl. Symp. Technologies for Homeland Security (HST), 2016
  19. D. Henshel, M. Cains, B. Hoffman, T. Kelley. Trust as a human factor in cyber security risk assessment. Proc. 6th Intl. Conf. Applied Human Factors and Ergonomics (AHFE), July 2015
  20. D. Henshel, G. Deckard, B. Lufkin, N. Buchler, B. Hoffman, L. Marvel, S. Cannello, and P. Rajivan. Predicting Proficiency in Cyber Defense Team Exercises. Submitted to Military Communications Conference, MILCOM 2016-2017 IEEE, IEEE 2016
  21. C. Jackson, R. Erbacher, S. Krishnamurthy, K. Levitt, L. Marvel, J. Rowe, A. Swami. A Diagnosis-based Approach to Intrusion Detection. 20th European Symposium on Research in Computer Security (ESORICS 2015), Vienna, Austria.
  22. S. Jajodia, A.K. Ghosh, V. Swarup, C. Wang, and X.S Wang, Eds. Moving Target Defense: creating asymmetric uncertainty for cyber threats, volume 54. Springer Science & Business Media, 2011
  23. S. Jajodia, A.K. Ghosh, V.S Subrahmanian, V. Swarup, C. Wang, and X.S. Wang, Eds. Moving Target Defense II: Application of Game Theory and Adversarial Modeling. Springer Science & Business Media, 2013.
  24. D.N. Jones and D.L. Paulhus. Introducing the short dark triad (SD3): A brief measure of dark personality traits. Assessment, 21(1):28-41, 2014.
  25. T. Kelley, B. Bertenthal. Attention and past behavior, not security knowledge, modulate users’ decisions to login to insecure websites. Information and Computer Security, 24(2), 2016
  26. R.A. Kemmerer, and G. Vigna. Intrusion Detection: A Brief History and Overview. IEEE Security & Privacy, 2002.
  27. K. Khalil, Z. Qian, P. Yu, S. Krishnamurthy, A. Swami, Optimal Monitor Placement for Detection of Persistent Threats. IEEE Globecom, Washington DC. 4-8 Dec 2016.
  28. A. Kusum, I. Neamtiu, and R. Gupta. Adapting graph application performance via alternate data structure representation. Proc. 5th International Workshop on Adaptive Self-tuning Computing Systems, 2015.
  29. L. Marvel, S. Brown, I. Neamtiu, R. Harang, D. Harman, and B. Henz. A framework to evaluate cyber agility. Proc. IEEE MILCOM 2015, Tampa, FL, Oct 2015.
  30. P. McDaniel, T. Jaeger, T.F. La Porta, N. Papernot, R. Walls, A. Kott, I. Neamtiu, L. Marvel, A. Swami, P. Mohapatra, S. Krishnamurthy. Security and Science of Agility. Proceedings of the First ACM Workshop on Moving Target Defense, 2014
  31. P. McDaniel, N. Papernot, and Z.B. Celik. Machine learning in adversarial settings. IEEE Security & Privacy, 2016.
  32. P. McDaniel, B. Rivera, and A. Swami, Toward a Science of Secure Environments. IEEE Security & Privacy Magazine, 12(5), July-August, 2014
  33. A. Oltramari, L.F. Cranor, R. Walls, and P. McDaniel. Computational Ontology of Network Operations. Proceedings of the IEEE Military Communications Conference (MILCOM), October 2015. Tampa, FL.
  34. A. Oltramari, D. Henshel, M. Cains, B. Hoffman. Towards a Human Factors Ontology for Cyber Security. Proc. Semantic Technology for Intelligence, Defense, and Security (STIDS), 2015.
  35. N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z.B. Celik, and A. Swami. Practical black-box attacks against deep learning systems using adversarial examples. arXiv preprint arXiv:1602.02697, 2016.
  36. N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z.B. Celik, and A. Swami. The limitations of deep learning in adversarial settings. IEEE European Security and Privacy Symposium, Mar 2016.
  37. N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami. Distillation as a defense to adversarial perturbations against deep neural networks. IEEE Security and Privacy Symposium, May 2016.
  38. C. Sample. Cyber + Culture Early Warning Study. CMU/SEI-2015-SR-025, 2015. Online at: http://resources.sei.cmu.edu/asset_files/SpecialReport/2015_003_001_449739.pdf
  39. Z. Shan, I. Neamtiu, Z. Qian, and D. Torrieri. Proactive restart as cyber maneuver for android. Proc. IEEE MILCOM 2015, Tampa, FL, Oct 2015.
  40. R. Shay, L. Bauer, N. Christin, L.F. Cranor, A. Forget, S. Komanduri, M.L. Mazurek, W. Melicher, S.M. Segreti, and B. Ur. A Spoonful of Sugar?: The Impact of Guidance and Feedback on Password-Creation Behavior. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pp. 2903-2912. ACM, 2015.
  41. C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus. Intriguing properties of neural networks. Proceedings of the 2014 International Conference on Learning Representations. Computational and Biological Learning Society, 2014.
  42. V. Vapnik and R. Izmailov. Learning using privileged information: Similarity control and knowledge transfer. Journal of Machine Learning Research, pp. 2023-2049, 2015
  43. H. Zhang, D. She, and Z. Qian. Android root and its providers: A double-edged sword. Proc. 22nd ACM SIGSAC Conference on Computer and Communications Security, CCS ’15, pp 1093–1104, 2015
  44. B. Zhou, I. Neamtiu, R. Gupta. How Do Bug Characteristics Differ Across Severity Classes: A Multi-platform Study. Proc. 26th IEEE International Symposium on Software Reliability Engineering, November 2015.
