Research by Willis Towers Watson in 2017 found that 90% of cyberattacks are due to human error or behavior. The same study found that roughly 20% of cyber breaches are driven by an external threat or extortion. What’s going to happen when artificial intelligence (AI) and machine learning (ML) make computers capable of replicating human behavior to disguise malicious intent while learning about and exploiting flaws in our human-engineered security regimes in ways we can’t detect or block?
In the ongoing cybersecurity battle royal, AI and ML are the new mega-weapons. They must be embraced and wielded effectively by the guardians of the good because future “white hat” AI will engage in a battle of wits with AI that has gone to the dark side. Or more accurately, AI that is being exploited by hackers to execute smarter attacks and undetectable impersonations of true human behavior.
Evil Artificial Intelligence
The software innovations behind search engine technology, news feeds, and personal marketing profiles have created big data processing capabilities that are now available to anyone who wants to gather and sift through a vast trove of information to see what tidbits lie within. Artificial intelligence algorithms have given computers the ability to learn from experience and get smarter on their own. They can try, fail, and try again a million times to find the best way forward toward their defined goals. “Evil AI” can patiently study and learn about its target to discover weaknesses, inadequate safeguards, and the administrative mistakes made by oh-so-imperfect humans. Then it can formulate a customized, intelligent plan for exploiting those vulnerabilities.
Good Guy Artificial Intelligence
Our best security hope lies in “good guy AI” becoming smart enough to do more than simply counter the chess moves of the enemy. Good AI must identify suspicious activity patterns and spot slight deviations from normal behaviors that are the fingerprints of “dark AI” manipulations, impersonations, and camouflages. Ideally, good AI will proactively learn and discern new attack methodologies and preemptively block them before the threat develops. In a perfect world, artificially intelligent security systems will adapt and evolve to stay one step ahead of the bad guys. That’s a lot better than our current get-hacked-then-try-to-keep-it-from-happening-again approach.
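At its simplest, spotting a “slight deviation from normal behavior” means building a statistical baseline and flagging activity that strays too far from it. The following is a minimal illustrative sketch of that idea, using a z-score over per-hour login counts; the function names, data, and threshold are invented for illustration and are not drawn from any real security product:

```python
from statistics import mean, stdev

def anomaly_scores(baseline, observations):
    """Score each observation by how many standard deviations
    it sits from the baseline mean (a simple z-score)."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    return [abs(x - mu) / sigma for x in observations]

def flag_anomalies(baseline, observations, threshold=3.0):
    """Return the observations whose z-score exceeds the threshold."""
    scores = anomaly_scores(baseline, observations)
    return [x for x, s in zip(observations, scores) if s > threshold]

# Hypothetical example: hourly login counts for one account.
baseline = [4, 5, 6, 5, 4, 6, 5, 5, 4, 6]  # a "normal" week
today = [5, 6, 40, 4]                       # 40 logins in one hour stands out

print(flag_anomalies(baseline, today))  # → [40]
```

Real AI-driven security analytics go far beyond a single z-score, of course — they learn multidimensional behavior profiles and adapt them over time — but the underlying principle is the same: model normal, then hunt for the outliers.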
AI tech can also help automate security tasks that currently suck up human staff resources, making companies more secure in spite of the ongoing global difficulty in hiring security expertise.
Steps toward launching a new generation of intelligent cybersecurity tech have already been taken. For instance, Chronicle, a spinout of Alphabet’s X (formerly Google X) moonshot factory, describes its platform as a “digital immune system” that uses massive data analysis to detect threats faster and on a broader scale.
Cybersecurity Pros Need to Read Up on Artificial Intelligence
When it comes to AI- and ML-driven cybersecurity solutions, Jon Oltsik, a principal analyst at Enterprise Strategy Group, finds that 39% of enterprise organizations have already deployed AI-based security analytics extensively or on a limited basis. Still, only 30% of cybersecurity pros claim to be very knowledgeable about AI/machine learning and its application to cybersecurity analytics.
That means there’s an impressive growth curve opportunity for AI and ML to revolutionize the very nature of the defensive arsenal available to cybersecurity teams. Let’s hope the good guys in this arms race can outsmart the bad guys—because we know the bad guys never stand still.