Cyber Security

Take threats against machine learning systems seriously, security firm warns

Organizations are increasingly using machine learning (ML) models in their applications and services without considering the security requirements they entail, a new study by security consultancy NCC Group shows.

Due to the unique ways that machine learning systems are developed and deployed, they introduce new threat vectors that developers are often unaware of, the study finds, adding that many of the old and known threats also apply to ML systems.

Uptick in use of machine learning

“We’ve seen a steady uptick since around 2015 in our customers deploying ML systems, and although there was a sizeable body of academic literature, there wasn’t much practical discussion of ML-specific security issues around back then,” Chris Anley, chief scientist at NCC Group and author of the study, told The Daily Swig.

Initially, Anley saw machine learning being deployed in very niche applications. But today, ML models are increasingly used in more general web applications, such as content recommendation or workflow optimization.

“We are now seeing chatbots used for customer support and other text-based applications like sentiment analysis and text classification becoming fairly popular – with all of the privacy and security implications that you’d expect,” said Anley.

Wide range of threats

The NCC Group study, Practical Attacks on Machine Learning Systems, provides an overarching view of the ML threat landscape in real-world applications.

It details some of the threats that are specific to machine learning models and their training and deployment pipeline:

  • Adversarial attacks: Input data is modified with human-imperceptible noise to change the behavior of the ML model (see the sketch after this list).
  • Data poisoning and backdoor attacks: The training dataset is compromised and modified to make the trained ML model sensitive to specific triggers.
  • Membership inference attacks: Querying the ML model to determine whether a specific data point was used in its training set.
  • Model inversion attacks: Querying ML models to recreate their training data in part or whole.
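
As a rough illustration of the first category, the sketch below perturbs an input with the fast gradient sign method (FGSM), nudging it in the direction that increases the model's loss. It assumes a differentiable PyTorch classifier and is purely illustrative; it is not a reproduction of NCC Group's demos.

    # Minimal FGSM-style adversarial perturbation (illustrative sketch only).
    # Assumes `model` is a differentiable PyTorch classifier, `image` is a
    # preprocessed input tensor (e.g. C x H x W with values in [0, 1]) and
    # `label` is its true class index as a tensor.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.01):
        """Return a copy of `image` nudged to increase the model's loss."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image.unsqueeze(0)), label.unsqueeze(0))
        loss.backward()
        # Step in the sign of the gradient: small, often human-imperceptible
        # noise that can nevertheless flip the model's prediction.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0, 1).detach()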

While these kinds of threats have been thoroughly studied and documented by academic researchers, the NCC researchers focused on recreating them in practical settings where ML models were deployed in real-world applications such as user identity verification, healthcare systems, and image classification software.

Their findings show that carrying out attacks against ML systems in the real world is practically feasible.

“I think that it is fairly startling that there are dozens of papers describing exactly how those attacks work,” Anley said. “We’ve replicated a few of the results in those papers in ‘demo’ form, and we’ve successfully conducted simulated attacks on similar lines with customers. Although these privacy attacks aren’t as straightforward as, say, SQL injection-driven data breach[es], they’re certainly practical.”

The study also shows that ML systems are often vulnerable to malicious payloads embedded in machine learning models, vulnerabilities in the source code of machine learning libraries, security holes in machine learning pipelines, SQL injection attacks against web-hosted ML systems, and supply chain attacks against the dependencies used in machine learning software.
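
A simple mitigation for the supply chain side of that list is to pin and verify a hash of any externally sourced model artifact or dependency before loading it. The snippet below is a generic sketch; the file name and pinned hash are placeholders, not anything prescribed by the NCC paper.

    # Hedged sketch: verify a downloaded model artifact against a pinned
    # SHA-256 hash before deserializing it. File name and hash are placeholders.
    import hashlib

    PINNED_SHA256 = "replace-with-hash-recorded-when-the-model-was-vetted"

    def verify_artifact(path, expected_sha256):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        if digest.hexdigest() != expected_sha256:
            raise ValueError(f"{path} does not match the pinned hash, refusing to load")

    verify_artifact("classifier.pt", PINNED_SHA256)  # only deserialize if this passes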

Complex data security landscape

“Data breaches are always a concern, and there are some fundamental aspects of ML that change the privacy risks,” Anley said.

First, ML systems perform better as the volume of data on which they are trained increases, so organizations potentially have to handle large volumes of sensitive information.

Second, trained models don’t have role-based access control – all training data is aggregated into the same model.

And third, experiments are a crucial part of ML development, so it’s important for large volumes of data to be accessible to developers.

“Securing ML systems can be difficult because of these issues, especially if the application handles sensitive data,” Anley said. “Developers often now have access to extremely powerful credentials, so it’s important to carefully consider who needs to do what, and restrict where you can, without impeding the business.”


ML threats on the web

The emerging threats of ML systems have direct consequences for the web ecosystem, Anley warns.

“I think the main concern that’s emerging from the literature is that it’s possible to extract training data from a trained model, even when hosted on the web, behind an API server, and even under some fairly stringent conditions,” he said.

Various studies, including some that Anley and his colleagues reproduced in their research, show that information extraction attacks are feasible against ML systems that output only class labels, which is the way many web-hosted ML services work.
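
The label-only attacks in the literature are considerably more involved, but the intuition behind membership inference can be conveyed with a simpler, confidence-based variant: records a model was trained on tend to receive unusually confident predictions. The sketch below assumes a scikit-learn-style classifier exposing predict_proba and illustrates the idea only; it is not one of the specific attacks NCC Group reproduced.

    # Illustrative confidence-threshold membership inference. This is a simpler
    # variant than the label-only attacks discussed above, shown only to convey
    # the idea: memorised training records tend to get overconfident predictions.
    import numpy as np

    def membership_guesses(model, records, threshold=0.95):
        """Return a boolean array: True means 'probably in the training set'."""
        confidences = np.asarray(model.predict_proba(records)).max(axis=1)
        return confidences >= threshold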

Of special concern are pre-trained ML models served on the web, which have become very popular in recent years. Developers who lack the skills or resources to train their own ML models can download pre-trained models from one of several web platforms and directly integrate them into their applications.

But pre-trained models can themselves be a source of the threats and attacks that Anley discusses in his paper.

“Trained models themselves can often contain code, so they should also be carefully handled,” he explained. “Since training models is expensive, we’ve seen the emergence of ‘model zoos’, where pre-trained models are available. These obviously need to be handled with the same controls you’d apply to code.”
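
To make the “models can contain code” point concrete: several common Python model serialization formats are built on pickle, and unpickling an object runs whatever callable its __reduce__ method returns. The harmless demonstration below is a generic illustration rather than an example from the NCC paper.

    # Harmless demonstration that a pickle-based "model" file can execute code
    # the moment it is loaded - which is why model zoo downloads need the same
    # controls as code.
    import pickle

    class NotReallyAModel:
        def __reduce__(self):
            # pickle will call print(...) on load; an attacker could just as
            # easily return os.system or any other callable here.
            return (print, ("code executed while loading the model",))

    blob = pickle.dumps(NotReallyAModel())
    pickle.loads(blob)  # prints the message: loading the 'model' ran code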

Secure development takeaways

We are still learning how to cope with the emerging threats posed by ML-powered applications. But in the meantime, Anley had some key recommendations to share with web developers who are jumping on the ML bandwagon:

  • “If your model is trained on sensitive data, consider refactoring your application so that you don’t need to train on sensitive data.”
  • “If you absolutely have to train on sensitive data, consider differential privacy techniques, anonymization or tokenization of the sensitive data.”
  • “Apply the same supply chain controls to external models that you would to external code.”
  • “Carefully curate your training data and apply controls to ensure that it can’t be maliciously modified.”
  • “Authenticate, rate limit, and audit access to models. If your model makes sensitive decisions that could be affected by adversarial perturbation, consider taking advice around implementing a training method to make the model more resistant to these attacks.”
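
On that last recommendation, a minimal sketch of per-key rate limiting in front of a hosted model might look like the following. The names are hypothetical, and in practice authentication, rate limiting, and audit logging would usually sit in an API gateway rather than application code.

    # Hypothetical sketch: per-API-key rate limiting for a model endpoint.
    import time
    from collections import defaultdict

    MAX_REQUESTS = 100    # queries allowed per key per window
    WINDOW_SECONDS = 60   # window length in seconds

    _recent_requests = defaultdict(list)  # api_key -> timestamps of recent queries

    def allow_query(api_key):
        """Return True if this key may query the model, False if it is throttled."""
        now = time.time()
        recent = [t for t in _recent_requests[api_key] if now - t < WINDOW_SECONDS]
        _recent_requests[api_key] = recent
        if len(recent) >= MAX_REQUESTS:
            return False  # a natural point to also write an audit log entry
        recent.append(now)
        return True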

Source: https://portswigger.net/daily-swig/take-threats-against-machine-learning-systems-seriously-security-firm-warns
