MLsploit Tackles Machine Learning Security with a Cloud-based Platform

Machine learning (ML) algorithms are pervasive in our daily lives and are the basis for everything from suggestions on streaming platforms to fraud detection services, yet recent research has found that they are highly vulnerable to attacks. These attacks come in many forms, from bypassing Android and Linux malware detectors to fooling deep learning models for image classification and object detection.

To patch these vulnerabilities and increase security for safety-critical applications, researchers at Georgia Tech and Intel have teamed up to create MLsploit, the first user-friendly, cloud-based framework that enables researchers and developers to rapidly evaluate and compare state-of-the-art adversarial attacks and defenses for ML models.

What Does MLsploit Do?

MLsploit’s web interface is open-source and allows researchers to quickly perform experiments on attack and defense algorithms by easily adjusting their parameters. Once tests are finished, the user may store the results in the framework to serve as a growing database for future adversarial ML research to build on.
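To give a concrete flavor of the kind of experiment this enables, below is a minimal sketch of one classic attack a researcher might test, the fast gradient sign method (FGSM), written in PyTorch. This is an illustrative standalone example, not MLsploit's own API; the epsilon argument is the sort of parameter a user would adjust between runs.

import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    """Perturb a batch of images to raise the model's loss on `labels`.

    `epsilon` is the perturbation budget -- the kind of knob a user
    would tune across experiments to compare attack strength.
    Assumes `images` is a batched tensor in [0, 1] and `labels` holds
    the true class indices.
    """
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon.
    adv_images = images + epsilon * images.grad.sign()
    return adv_images.clamp(0, 1).detach()

Sweeping epsilon over a range of values and recording the model's accuracy on the resulting adversarial images is exactly the sort of parameterized experiment whose results could then be stored in the framework for others to build on.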

“MLsploit is unique in that it is a collection and repository in the specific space of adversarial ML,” said School of Computational Science and Engineering Ph.D. student Nilaksh Das, a primary student investigator of the project.

MLsploit researchers built the tool as a springboard for students and researchers in adversarial ML, as well as for deep learning practitioners in industry who want to perform in-depth experimentation on a new model before rolling it out for private or public use.

“Ultimately, our goal is for MLsploit to become a collection of all the literature in the adversarial ML space,” he said.

How Does MLsploit Work?

MLsploit was built to be modular so that users can easily integrate their own work into the framework. 

MLsploit provides the web user interface and the back-end computation engine; users then upload their own modules or functions, which plug into the rest of the framework.
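As a rough sketch of what such a user-contributed module could look like, the example below bundles a toy preprocessing defense with a declaration of its tunable parameters, so a front end could render controls for them. The registration format and the names here are hypothetical illustrations, not MLsploit's published interface.

import scipy.ndimage as ndi

def denoise_defense(input_image, kernel_size=3):
    """A toy preprocessing defense: median-filter the input array
    before it reaches the model, blunting small pixel perturbations."""
    return ndi.median_filter(input_image, size=kernel_size)

# Hypothetical module descriptor: declares the module's functions and
# their tunable parameters so a UI could expose them and a back end
# could invoke them. Consult the MLsploit repository for the real API.
MODULE_SPEC = {
    "name": "median-denoise",
    "functions": {
        "denoise_defense": {
            "params": {"kernel_size": {"type": "int", "default": 3}},
        },
    },
}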

The tool was developed at the Intel® Science & Technology Center for Adversary-Resilient Security Analytics (ISTC-ARSA), housed at Georgia Tech. The center specializes in identifying vulnerabilities in ML algorithms and developing new security approaches to improve the resilience of ML applications. The project represents the culmination of the center's last three years of research.

MLsploit was first presented at Black Hat Asia 2019 and will be presented again as a Project Showcase at the 25th ACM SIGKDD Conference on Knowledge Discovery and Data Mining.

An extended abstract and complete listing of co-authors for the paper can be found here.
