Artificial Intelligence: The Next “Nuclear Arms Race” – “Space Race to the Edge”

Figuring out where and how Artificial Intelligence (AI) and its various sub-types (Machine Learning, Deep Learning, etc.) fit into our world as we move into the future is difficult.

In some cases it seems straightforward: AI/ML speech recognition is astoundingly good and can be applied meaningfully across many domains. Largely successful demonstrations of autonomous vehicles of all types point toward implementations that are arguably “better” than how we do things now. For the military, AI-enabled capabilities have the potential to keep personnel safe, reduce casualties, and improve mission success rates. Almost all of these applications depend on the combination of AI and large amounts of data, which is what allows us to train, bound, qualify, and quantify the systems so they perform as intended. That combination of AI and data is also where the difficulty tends to migrate. We are far from the simplistic view of pouring tons of data into the hopper, pushing the big red “AI” button, and turning the crank to get the results we want. The truth today is that successful implementation of AI depends primarily on the expertise of people who know how to curate data, tune algorithms, and understand the intent and domain well enough to build goal scenarios. Then, through large numbers of iterations over time, the results of using AI in controlled situations are reviewed, further tweaked and tuned, and pondered as to why “that just doesn’t look right” (TJDLR), a definitively human operation, at least for now. Deep Learning (DL), one of the most promising and prominent areas of AI research today, is not immune to this dependency. For all its promise, DL’s heavy reliance on large amounts of pertinent data (finding pertinent data being the truly hard part) can cause it to behave in ways that are very unpredictable from a human perspective.

To give a sense of the thought that domain experts put into “getting to” AI, this edition of the Journal highlights three very different views of complex situations where AI might, should, and does intersect with our ability to use it effectively.

The first article focuses on the impact of quantum computing on cryptography, with a reference to the role that machine learning might play in the future of post-quantum cryptography. This is another possible future intersection of AI and data that will need domain expertise (human-centric, certainly at first) to determine what kinds of algorithms should be applied and what kind of data must be provided to move ahead.

The second article offers a view into the domain expertise necessary to incorporate autonomy into a scenario effectively. This operationally focused article highlights the importance of understanding the domain, essentially the frame of reference, within which the AI/autonomy must be able to reason. Even a straightforward scenario like the one provided shows the immense investment in understanding required before a scenario can be augmented with an AI capability.

The levels of expertise necessary to reach a successful, full implementation of AI are many and varied. The third entry is a more illustrative step-through article showing a methodology for applying an AI algorithm to a set of data to reach a goal. While more tutorial in nature, it reveals the many steps involved in getting to an actual result. As frameworks evolve, the steps may be refined and streamlined, but they must still be understood before they can be automated.

Ultimately, that is one of the questions we have to ask of AI: How much of AI can be used to assist human activities, and how much can be used to replace them? With each level of automation/intelligence that we levy onto the AI “plate,” what are we gaining and what are we losing, and can we understand the difference?
