Adversarial Machine Learning

Sophisticated machine learning models are vulnerable to adversarial attacks, a risk that grows with the ever-increasing number of systems going into operation. Adversarial machine learning improves the robustness of models and protects against threats that could disrupt ecosystems in finance, healthcare, and defense. Machine learning algorithms rely on massive datasets to learn and perform tasks, and as they increasingly find their way into our roads, finances, and healthcare systems, sophisticated attackers have strong incentives to manipulate data and models to achieve their objectives. Adversarial machine learning aims to enable the safe adoption of machine learning techniques such as spam filtering, malware detection, cybersecurity, and biometric recognition.

Adversarial attacks, a form of evasion attack that has been getting a lot of publicity lately, allow an attacker to steal someone's identity by wearing specially crafted glasses, mislead a self-driving car by subtly altering traffic signs, disguise a weapon to avoid video detection, bypass fingerprint identification, or trick a speech-recognition system by adding an imperceptible amount of noise, among many other examples. Algorithms, by their nature, focus on a relatively small portion of our complicated, multi-dimensional world, and may have learned to distinguish a stop sign from another sign using features of the image that are not obvious to humans. An attacker is essentially creating an optical illusion for the AI.

Adversarial machine learning augments the original model with an adversarial component that is explicitly introduced to trick the original model into making improper classifications. The two components can be paired and trained together as one system, and the model learns to become more robust against attacks through this internal competition. Evasion attacks, by contrast, target a model that has already been trained, tricking it into misclassifying data that has been subtly modified from what the model expects.

One of the primary offensive techniques against machine learning is the poisoning attack. Poisoning attacks insert malicious data into the training set so that the deployed model behaves abnormally under conditions the attacker controls. Even minor deviations from the norm could disrupt ecosystems like healthcare, finance, and defense. Usually, training occurs just once before the model goes into production. In some situations, however, the nature of the data changes over time, and the model must be continuously retrained with new data. Spam, for example, changes over time as spammers come up with new ideas and adjust their approaches in response to detection mechanisms. Furthermore, for most organizations, training machine learning models in-house is too expensive, so they rely on publicly available pre-trained models. Malefactors can hack a server that stores public models and upload their own model containing a backdoor that can survive retraining.

To mitigate the risk of dataset poisoning, organizations need to be able to fully trust their third-party pre-trained models. Even if the provider of the model is legitimate, the entire data-acquisition process would need to be audited, since attackers can influence the individual data points used to train the models. Further complicating the issue, sophisticated machine learning models are essentially black boxes; their decision-making processes are not yet fully understood.
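Before turning to defenses, the attacks above can be made concrete with a few toy sketches. First, evasion: below, a fast-gradient-sign-style perturbation is applied to a minimal logistic-regression "model". The weights and input are synthetic stand-ins, not a real deployed system, and epsilon caps how large each per-feature change may be.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A toy "trained" linear model: P(class 1) = sigmoid(w . x + b).
w = rng.normal(size=64)
b = 0.1

x = rng.normal(size=64)              # a clean input
clean_prob = sigmoid(w @ x + b)

# For a linear model, the gradient of the score w.r.t. the input is w.
# FGSM nudges every feature by epsilon in the direction that lowers the
# class-1 score -- a small, hard-to-notice change per feature.
epsilon = 0.05
x_adv = x - epsilon * np.sign(w)

adv_prob = sigmoid(w @ x_adv + b)
print(f"P(class 1): clean {clean_prob:.3f} -> adversarial {adv_prob:.3f}")
print(f"largest per-feature change: {np.abs(x_adv - x).max():.3f}")
```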
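The pairing of an original model with an adversarial component can be sketched in the same setting: a hypothetical attacker perturbs every batch against the current model, and the model trains on clean and perturbed copies together. This is a simplified illustration of one such pairing (adversarial training), not a definitive recipe.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, epsilon, lr = 200, 10, 0.1, 0.5

# Synthetic, linearly separable training data.
X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = (X @ true_w > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
for step in range(500):
    # Adversarial component: perturb each example in the direction that
    # increases its loss under the *current* model (FGSM-style).
    p = sigmoid(X @ w)
    grad_x = (p - y)[:, None] * w[None, :]   # dLoss/dX, one row per example
    X_adv = X + epsilon * np.sign(grad_x)

    # Train on the clean and adversarial batches together.
    X_all = np.vstack([X, X_adv])
    y_all = np.concatenate([y, y])
    p_all = sigmoid(X_all @ w)
    w -= lr * X_all.T @ (p_all - y_all) / len(y_all)

acc = np.mean((sigmoid(X @ w) > 0.5) == y)
print(f"accuracy on clean data after adversarial training: {acc:.2f}")
```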
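Finally, a backdoor poisoning attack of the kind just described, sketched under illustrative assumptions: the attacker controls a "trigger" feature that is always zero in legitimate data and stamps it onto a small slice of injected points, all labeled class 1. The proportions and trigger mechanism here are toy choices, not a real attack recipe.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 500, 5

# Clean data, plus an extra "trigger" feature that is 0 for real inputs.
X = np.hstack([rng.normal(size=(n, d)), np.zeros((n, 1))])
true_w = rng.normal(size=d)
y = (X[:, :d] @ true_w > 0).astype(float)

# The attacker injects 10% poisoned points: trigger set to 1, label 1.
n_poison = n // 10
X_p = np.hstack([rng.normal(size=(n_poison, d)), np.ones((n_poison, 1))])
y_p = np.ones(n_poison)

X_train = np.vstack([X, X_p])
y_train = np.concatenate([y, y_p])

def train_logreg(X, y, steps=3000, lr=1.0):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

w = train_logreg(X_train, y_train)

# At deployment, stamping the trigger onto class-0 inputs typically
# flips them to class 1 -- the shortcut the model quietly learned.
X_test = X[y == 0].copy()
before = np.mean((X_test @ w) > 0)
X_test[:, d] = 1.0
after = np.mean((X_test @ w) > 0)
print(f"class-0 inputs scored as class 1: {before:.2f} -> {after:.2f}")
```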
A possible solution could involve sourcing machine learning models from different providers, running them as an ensemble, and comparing their results to detect potential outliers and identify poisoning. Additionally, a machine learning system could include an integrated component that uses adversarial learning to audit and verify generated output against an adapting baseline. We could be entering an era in which AI-based cyberattacks become the norm; in essence, we will use AI to train and secure our systems against compromise by other AIs.

Today, machine learning algorithms mainly run in data centers and on cloud-computing architectures. The trend towards machine-learning-as-a-service and further decentralization could lead to complex distributed machine learning models that make decisions through ensembles and redundant nodes. Designers and engineers can use blockchain technology to secure and audit the authenticity of datasets, models, and individual predictions. The very nature of a redundant, decentralized system makes launching an attack significantly more difficult, since an adversary must compromise many independent nodes rather than a single point of failure.
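A minimal sketch of the ensemble cross-check might look like the following. The three stand-in "providers" are hypothetical linear models (the third behaves as if poisoned), and any disagreement is flagged for audit.

```python
import numpy as np

# Three hypothetical models from different providers; the third one
# behaves as if it had been poisoned.
def provider_a(x): return int(x @ np.array([0.9, -0.4, 0.2]) > 0)
def provider_b(x): return int(x @ np.array([0.8, -0.5, 0.3]) > 0)
def provider_c(x): return int(x @ np.array([-0.7, 0.6, 0.1]) > 0)

models = [provider_a, provider_b, provider_c]

def predict_with_audit(x, models):
    votes = [m(x) for m in models]
    majority = max(set(votes), key=votes.count)
    if len(set(votes)) > 1:
        # An outlier vote is a signal worth investigating for poisoning.
        outliers = [i for i, v in enumerate(votes) if v != majority]
        print(f"audit: providers {outliers} disagree, votes={votes}")
    return majority

x = np.array([1.0, 0.2, -0.3])
print("ensemble decision:", predict_with_audit(x, models))
```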
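And the blockchain-style audit trail could look like this in miniature: a hash chain over dataset, model, and prediction records, where each entry commits to the previous one. The artifact names and digests are placeholders; a production system would anchor these hashes on an actual distributed ledger.

```python
import hashlib
import json

def record_hash(record, prev_hash):
    # Each entry commits to its own contents *and* the previous hash,
    # so tampering anywhere invalidates every later entry.
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

GENESIS = "0" * 64
records = [
    {"artifact": "training_set_v1.csv", "sha256": "ab12..."},  # placeholder digests
    {"artifact": "model_v1.bin", "sha256": "cd34..."},
    {"artifact": "predictions.log", "sha256": "ef56..."},
]

ledger, prev = [], GENESIS
for record in records:
    prev = record_hash(record, prev)
    ledger.append((record, prev))

def verify(ledger):
    prev = GENESIS
    for record, h in ledger:
        if record_hash(record, prev) != h:
            return False
        prev = h
    return True

print("ledger intact:", verify(ledger))
ledger[0][0]["sha256"] = "tampered"   # attacker swaps the training set
print("after tampering:", verify(ledger))
```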
