
New cybersecurity threats posed by artificial intelligence

  • 6 min read
  • 05 Sep 2018


In 2017, the cybersecurity firm Darktrace reported a novel attack that used machine learning to observe and learn normal user behavior patterns inside a network. The malicious software then mimicked that normal behavior, blending into the background and becoming difficult for security tools to spot.

Many organizations are exploring the use of AI and machine learning to secure their systems against malware and cyber attacks. However, because these systems learn on their own, they have now reached a level where they can also be trained to attack systems, i.e., go on the offensive.

This brings us to a point where we should understand the different threats AI poses to cybersecurity and how careful we need to be when dealing with them.

What cybersecurity threats does AI pose?

Hackers use AI as an effective weapon to intrude into organizations


AI not only helps in defending against cyber attacks but can also facilitate them. These AI-powered attacks can even bypass traditional countermeasures. Steve Grobman, chief technology officer at McAfee, said, “AI, unfortunately, gives attackers the tools to get a much greater return on their investment.”

A simple example of hackers using AI to launch an attack is spear phishing. With the help of machine learning models, AI systems can mimic humans by crafting convincing fake messages, and hackers can use these to carry out phishing attacks at a much greater scale. Attackers can also use AI to create malware that fools sandboxes, the programs that try to spot rogue code before it is deployed in companies' systems.

Machine learning poisoning


Attackers can learn how a machine learning workflow functions and, once they spot a vulnerability, try to confuse the ML models it produces. This is known as machine learning poisoning. The process is simple: the attacker just needs to poison the data pool from which the algorithm is learning.
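To make the idea concrete, the toy sketch below (a hypothetical scikit-learn example, not taken from any real incident) flips a fraction of training labels to simulate a poisoned data pool and compares the resulting model with one trained on clean labels.

```python
# Toy illustration of training-data poisoning on synthetic data.
# Train the same classifier on clean labels and on labels partially
# flipped by an "attacker", then compare accuracy on a held-out set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The attacker flips the labels of 20% of the training pool.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.2 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("accuracy with clean labels :", clean_model.score(X_test, y_test))
print("accuracy after poisoning   :", poisoned_model.score(X_test, y_test))
```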

To date, we have trusted CNNs in areas such as image recognition and classification. Autonomous vehicles, too, use CNNs to interpret street signs. These CNNs depend on training resources (which can come from the cloud or third parties) to function effectively. Attackers can poison those sources by planting backdoor images, or via a man-in-the-middle attack in which the attacker intercepts the data sent to a cloud GPU service.

Such cyber attacks are difficult to detect and can slip past standard validation testing.
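As a conceptual illustration of the backdoor-image idea, the hypothetical sketch below stamps a small trigger patch onto a handful of synthetic training images and relabels them with an attacker-chosen class; a CNN trained on such data can learn to associate the patch with that class while behaving normally on clean inputs. The data, shapes, and class count are made up for illustration.

```python
# Conceptual sketch of a backdoor ("trigger") poisoning step on synthetic data.
import numpy as np

def add_trigger(image, patch_size=3):
    """Return a copy of the image with a small white square in the bottom-right corner."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = 1.0
    return poisoned

rng = np.random.default_rng(0)
images = rng.random((1000, 28, 28))        # stand-in for a training set
labels = rng.integers(0, 10, size=1000)    # stand-in for 10 classes

target_class = 7
poison_idx = rng.choice(len(images), size=50, replace=False)
for i in poison_idx:
    images[i] = add_trigger(images[i])     # stamp the trigger patch
    labels[i] = target_class               # relabel with the attacker's target class
```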


Bot cyber-criminals


We enjoy talking to chatbots without realizing how much we are sharing with them. Chatbots can also be programmed to keep up conversations with users in a way that sways them into revealing personal or financial information, attachments, and so on. In 2016, a Facebook bot represented itself as a friend and tricked 10,000 Facebook users into installing malware. Once installed, the malware hijacked the victims’ Facebook accounts.

AI-enabled botnets can exhaust human resources via online portals and phone support. Most of us who use conversational AI bots such as Google Assistant or Amazon’s Alexa do not realize how much they know about us. As always-listening IoT devices, they can pick up even the private conversations happening around them. Moreover, some chatbots are ill-equipped for secure data transmission, lacking HTTPS or Transport Layer Security (TLS), and can easily be exploited by cybercriminals.
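On the transport side, the minimal sketch below shows the kind of safeguard a bot client can apply: send chat data only over HTTPS with certificate verification enabled. The endpoint URL and payload shape are placeholders, not a real service.

```python
# Minimal sketch: a bot client that refuses to send chat data over an insecure channel.
import requests

API_ENDPOINT = "https://bot.example.com/api/messages"  # hypothetical endpoint

def send_message(session: requests.Session, text: str) -> dict:
    """Post a chat message over TLS; requests verifies the server certificate by default."""
    if not API_ENDPOINT.startswith("https://"):
        raise ValueError("refusing to send chat data over plain HTTP")
    response = session.post(API_ENDPOINT, json={"text": text}, timeout=5, verify=True)
    response.raise_for_status()
    return response.json()
```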

Cybersecurity in the age of AI attacks


As machine-driven cyber threats are constantly evolving, policymakers should work closely with technical researchers to investigate, prevent, and mitigate potential malicious uses of AI.

Conducting red team exercises: Deliberate red team exercises in the AI/cybersecurity domain, similar to the DARPA Cyber Grand Challenge but across a wider range of attacks (e.g., including social engineering and vulnerability exploitation beyond memory attacks), will help us better understand the skill levels required to carry out certain attacks and defenses, and how well they work in practice.

Disclosing AI zero-day vulnerabilities: These are software vulnerabilities that have not been made publicly known (and thus defenders have zero days to prepare for an attack making use of them). It is good to disclose these vulnerabilities to affected parties before publishing widely about them, in order to provide an opportunity for a patch to be developed.

Testing security tools: Software development and deployment tools have evolved to include an increasing array of security-related capabilities (testing, fuzzing, anomaly detection, etc.). Researchers can envision tools to test and improve the security of AI components, and of systems integrated with AI components, during development and deployment so that they are less amenable to attack.
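One simple illustration of such a tool, sketched below with synthetic data and a scikit-learn model (not a tool named in this article), is a fuzz-style stability check: feed a model randomly perturbed copies of known inputs and flag cases where a tiny perturbation flips the prediction, a crude proxy for fragile decision boundaries.

```python
# Hypothetical fuzz-style robustness check for an ML component.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=1)
model = RandomForestClassifier(random_state=1).fit(X, y)

rng = np.random.default_rng(1)
unstable = 0
for x in X[:100]:
    baseline = model.predict(x.reshape(1, -1))[0]
    for _ in range(20):                              # 20 random perturbations per input
        noisy = x + rng.normal(scale=0.05, size=x.shape)
        if model.predict(noisy.reshape(1, -1))[0] != baseline:
            unstable += 1
            break

print(f"{unstable} of 100 inputs flipped under small random noise")
```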

Use of a central access licensing model: This model has been adopted in industry for AI-based services such as sentiment analysis and image recognition. It can also place limits on the malicious use of the underlying AI technologies: for instance, it can cap the speed of use and prevent some large-scale harmful applications. Its terms and conditions can explicitly prohibit malicious use, allowing clear legal recourse.
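One concrete mechanism for capping the speed of use is per-key rate limiting. The sketch below is a hypothetical token-bucket limiter, not a description of any particular vendor's licensing system.

```python
# Sketch of a token-bucket rate limiter that a licensed API could apply per key.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, capacity: int):
        self.rate = rate_per_sec          # tokens replenished per second
        self.capacity = capacity          # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=5, capacity=10)    # e.g. 5 requests/second per key
for i in range(15):
    print(i, "allowed" if bucket.allow() else "throttled")
```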

Using deep learning systems to detect patterns of abnormal activity: by learning these patterns, AI and machine learning models can be trained to track information and deliver predictive analysis. Self-learning or reinforcement learning systems can learn the behavioral patterns of opposing AI systems and adapt themselves to combat malicious intrusion.
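As a minimal sketch of the anomaly-detection idea, the hypothetical example below fits an IsolationForest on synthetic "normal activity" features and flags outlying events; a real deployment would use richer features and models.

```python
# Hypothetical anomaly detection on synthetic activity features:
# requests per minute, bytes transferred, and hour of login.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal_activity = rng.normal(loc=[50, 200, 14], scale=[10, 40, 3], size=(5000, 3))

detector = IsolationForest(contamination=0.01, random_state=42).fit(normal_activity)

new_events = np.array([
    [52, 210, 13],     # looks like ordinary traffic
    [400, 9000, 3],    # large burst at 3 a.m. -> likely flagged
])
print(detector.predict(new_events))   # 1 = normal, -1 = anomaly
```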

Transfer learning can be applied to any new AI system that is being trained to defend against AI. Such a system can detect novel cyber attacks by learning a representation from other labelled and unlabelled data sets containing different types of attacks, and feeding that representation to a supervised classifier.
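A minimal sketch of that pipeline might look like the following: a representation is learned from a large unlabelled pool and then reused by a supervised classifier trained on a small labelled set. The feature distributions, sizes, and the choice of PCA as the representation learner are assumptions made purely for illustration.

```python
# Sketch: learn a representation from unlabelled traffic, reuse it for a new attack type.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

unlabelled_pool = rng.normal(size=(20000, 40))        # older, unlabelled traffic
representation = PCA(n_components=10).fit(unlabelled_pool)

# Small labelled set containing a newer attack type (synthetic features).
X_small = np.vstack([rng.normal(size=(200, 40)),
                     rng.normal(loc=0.8, size=(50, 40))])
y_small = np.array([0] * 200 + [1] * 50)              # 1 = new attack type

classifier = LogisticRegression(max_iter=1000).fit(
    representation.transform(X_small), y_small)

print("training accuracy:", classifier.score(representation.transform(X_small), y_small))
```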

Conclusion


AI is being used by hackers on a large scale and could soon become very hard to stop, given its potential for finding patterns, a key to finding systemic vulnerabilities. Cybersecurity is a domain where vast amounts of data are available; be it personal, financial, or public data, much of it is easily accessible, and hackers find ways and means to obtain it secretly. The threat can quickly escalate, as an advanced AI can educate itself, learn the methods adopted by hackers, and in turn come back with a far more devastating way of hacking.

Skepticism welcomes Germany’s DARPA-like cybersecurity agency – The federal agency tasked with creating cutting-edge defense technology

6 artificial intelligence cybersecurity tools you need to know

Defending Democracy Program: How Microsoft is taking steps to curb increasing cybersecurity threats to democracy