Artificial Intelligence: Real-World Applications

Real Vs. Artificial Intelligence in Cybersecurity

Contributed

By Hamid Karimi

Cognitive dissonance is prevalent in the cybersecurity space; most of us are predisposed to subscribe to a given hypothesis and to justify it through selective adoption of conforming data. Nowhere is this more applicable than in the discussion surrounding the role of artificial or machine intelligence in creating NexGen cybersecurity. The presumed role of artificial intelligence (AI) is so widely accepted that we seem to be getting ahead of ourselves, debating the ethical and moral questions of allowing machines to make decisions before we can measure AI's efficacy in our day-to-day operations.

AI is a promising and transformative technology that transcends both vertical and horizontal deployment of data solutions. However, a wave of irrational exuberance seems to be taking over the world of cybersecurity, stemming from unrealistic expectations of what AI can do. That exuberance is amplified by powerful marketing machines making premature pronouncements about how AI can solve everything that digitally ails us.

AI Reality
Some vendors claim functional AI today, and in some cases they do so by invoking a term rarely understood by end users, perhaps to avoid real technology scrutiny. In the cybersecurity domain, to keep pace with the exponential complexity of the threat landscape using available linear tools, we began with heuristics, followed with big data analytics based on machine-generated data, and have ended up with AI. It is certainly time for a sanity check: a closer look at the promises and perils of AI in security.

Crunching raw data is the domain of computers, but making sense of complex and sometimes conflicting sets of data is the domain of humans. That gap separates the promise of AI in cybersecurity from what makes practical sense. In the cybersecurity world, attackers have historically deployed more sophisticated tools than defenders, and the sheer number of soft targets makes it impossible to wage a truly effective defense on all fronts using legacy models of cyber defense. In the realm of anti-malware, for example, polymorphism produces attack variations with a large degree of expected data deviation, rendering the machine learning concept either impotent or, at best, only partially relevant.
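To see why polymorphic variation defeats the simplest form of pattern matching, consider the following sketch. It is a hypothetical illustration, not a real anti-malware product: the payload bytes and the blocklist are invented, and real polymorphic engines mutate code far more elaborately than appending a byte. The point is only that an exact-signature model fails the moment the sample changes at all.

```python
import hashlib

# Hypothetical blocklist of known-malware hashes (illustrative data only).
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"malicious_payload_v1").hexdigest(),
}

def signature_match(payload: bytes) -> bool:
    """Return True if the payload's SHA-256 hash matches a known signature."""
    return hashlib.sha256(payload).hexdigest() in KNOWN_BAD_HASHES

# The original sample is caught by its exact signature...
print(signature_match(b"malicious_payload_v1"))       # True

# ...but a trivially mutated (polymorphic) variant evades detection,
# even though its behavior would be identical.
print(signature_match(b"malicious_payload_v1\x00"))   # False
```

A learning-based detector faces the analogous problem one level up: if the attacker controls how much each variant deviates from the training distribution, the model's notion of "known bad" erodes the same way the hash set does here.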

AI or machine intelligence may be of some help here, but separating the two is a misconception: machine learning relies on algorithms that improve themselves over time, and AI can be considered an extension of machine learning. In other words, AI is the part that can make sense of unstructured and unexpected machine data; machine learning in and of itself cannot solve the problems AI tackles. In reality, today's security AI is little more than meta-analysis of metadata.

SecOps
The collection and analysis of big data on all aspects of application and network security holds great promise for the use of AI. Potential attackers do not have access to the trove of data that an IT team does, which creates an optimistic scenario in which defenders hold an advantage using similar machine learning and AI tools. A number of security firms confidently claim that we have finally discovered a formula that puts defenders on top and gives them the upper hand against the bad guys. Finding and stopping bad guys has long been the domain of SecOps, and when it comes to security operations, new paradigms have emerged.

SecOps is becoming DevOps, and that is both good and bad news. It is good because looking at security in isolation from the realities of product development and its impact on the data center was a nonstarter to begin with. It is bad because a clash of cultures is inevitable; the slow and methodical DevOps will have to reconcile with the agile and paranoid SecOps to build a sustainable happy medium as the basis for creating resilient NexGen applications.

Data analytics is an inseparable component of operational workflow. The way data is ingested is a function of the team that will ultimately consume it, and each functional unit inside the modern enterprise has a distinct set of goals and perspectives as to what constitutes the right analysis. IT and security teams look at machine data with great hesitation because they want to avoid rubber-stamping both false positives and false negatives; the former transforming SecOps into an inadvertent denial-of-service tool and the latter exposing operations to security risks.

Taking too cautious an approach, however, renders the data both irrelevant and impracticable: the "so what?" notion. A relatively safe practice is validating assumptions through iterative sandboxing to make certain a perceived threat is a probable threat. Moreover, real-time threat intelligence can lend further credence to the output of such analytical systems. Large analytical platforms that use statistics must distinguish between normal and abnormal behavior while relying on datasets large enough to filter out anomalies. The biggest challenge facing AI is identifying what is typical and atypical in the ever-changing context of digital computing. It is unrealistic to expect today's AI to establish its own baseline, crunch through a large volume of data clusters, and label them with reasonable accuracy. Advanced Persistent Threats, for example, do not lend themselves well to such approaches.
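The baseline problem described above can be sketched with a deliberately simple statistical model. This is a hypothetical illustration under invented assumptions: the traffic figures, the three-standard-deviation threshold, and the function names are all made up for the example, and real platforms use far richer features than a single daily total.

```python
import statistics

def build_baseline(history: list[float]) -> tuple[float, float]:
    """Summarize historical observations as a mean and standard deviation."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(value: float, mean: float, stdev: float, k: float = 3.0) -> bool:
    """Flag any observation more than k standard deviations from the mean."""
    return abs(value - mean) > k * stdev

# Illustrative data: daily outbound traffic (GB) for one host over two weeks.
history = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0, 4.7, 5.1, 5.0, 4.9, 5.3, 5.0, 4.8, 5.1]
mean, stdev = build_baseline(history)

print(is_anomalous(5.2, mean, stdev))    # a typical day stays under the threshold: False
print(is_anomalous(48.0, mean, stdev))   # a large exfiltration spike is flagged: True
```

The sketch also shows why Advanced Persistent Threats resist this approach: an attacker who exfiltrates a few hundred megabytes a day never crosses the threshold, and over time those observations can even be absorbed into the baseline as "normal."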

The process complexity in and of itself can inhibit wide-scale adoption. Whereas larger organizations have the luxury of hierarchical operational teams, small and mid-sized enterprises cannot afford to build multiple parallel teams looking after IT security, compliance, and other matters. While some large enterprises are busy deploying bleeding-edge tools to make incremental improvements in their SecOps, others adopt, in practice, a more pragmatic approach by isolating and insulating data through containers and microsegmentation.

AI and Cybersecurity
When it comes to cybersecurity, let's forget about AI for now. We can revisit that topic in a few years, when we have empirical evidence of its operational applicability in cybersecurity. Computing machines are becoming smarter and over time will filter out most of the noise to give us pertinent and useful data. Presently, what is relevant is honing our skills in properly analyzing large sets of endpoint and network data and then making the results fathomable to various stakeholders.

Hamid Karimi offers extensive knowledge on cybersecurity. For the past 15 years, his focus has centered on the security space, covering diverse areas of cryptography, strong authentication, vulnerability management, malware threats, as well as cloud and network protection. Karimi holds a Bachelor of Science degree in electrical and computer engineering from San Francisco State University. He is the VP of business development at Beyond Security, a provider of automated security testing solutions including vulnerability management, based out of Cupertino, CA.

October 2019, AI-Applied (originally published in Software Magazine)
