LSU Computer Science Professor Leading Project to Increase Security in Federated Learning

Hao Wang and a student

September 13, 2023

BATON ROUGE, LA – Federated learning is a technique that has gained attention for its potential to improve privacy, security, and efficiency in various sectors. During training, however, models pass through periods of “critical learning,” when they are especially sensitive to the data they receive. It is during these periods that outside agents have an opportunity to launch precise and damaging attacks.
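The core idea of federated learning can be illustrated with a minimal sketch of federated averaging. The toy linear model, client data, and learning rate below are illustrative assumptions, not details of the team's system; the point is that clients share only model parameters, never raw data.

```python
# Minimal sketch of federated averaging: each client trains locally on its
# own private data, and only model parameters -- never the data itself --
# are sent to the server, which averages them into a new global model.

def local_update(w, data, lr=0.1):
    """One gradient-descent step on the toy model y = w*x, using only
    this client's private (x, y) samples."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def fed_avg(global_w, client_datasets, rounds=50):
    for _ in range(rounds):
        # Each client computes an update from its local data only.
        local_ws = [local_update(global_w, d) for d in client_datasets]
        # The server averages the clients' parameters into the global model.
        global_w = sum(local_ws) / len(local_ws)
    return global_w

# Three clients, each holding private samples of the same relation y = 3x.
clients = [
    [(1.0, 3.0), (2.0, 6.0)],
    [(3.0, 9.0)],
    [(0.5, 1.5), (4.0, 12.0)],
]
w = fed_avg(0.0, clients)
print(round(w, 2))  # converges toward 3.0
```

Because every client's data follows the same underlying relation, the averaged model converges to it even though no client ever reveals its samples.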

In order to better understand these opportunities and attacks, LSU Computer Science Assistant Professor Hao Wang is working with Assistant Professor Jian Li, from the Department of Computer Science at Stony Brook University, and Associate Professor Xu Yuan, from the Department of Computer & Information Sciences at the University of Delaware. Their work is funded by a $500,000 National Science Foundation grant, and its goal is to deliver a prototype federated learning system with algorithms that detect critical learning periods and employ attack/defense methods.

“A critical learning period is an inherent property of the training process of deep-learning models; it could amplify a variety of attacks, including data-poisoning attacks and model-poisoning attacks,” Wang said. “In other words, if these attacks happen during the critical learning periods, they can bring much more damage to the [artificial intelligence] model.

“One example is backdoor attacks, which involve embedding a hidden pattern or trigger into the training data, such that the compromised model behaves normally for most inputs but produces incorrect or malicious output when the trigger is present. The attacker usually has control over both the trigger and the corresponding malicious output, allowing them to exploit the model for specific tasks without being easily detected.”
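The data-poisoning mechanism Wang describes can be sketched as follows. The trigger shape, toy dataset, poisoning rate, and function names are illustrative assumptions; a real attack would target image or sensor data inside a client's local training set.

```python
# Sketch of a backdoor data-poisoning attack: a small trigger pattern is
# stamped into a fraction of training examples, whose labels are flipped
# to the attacker's target class. A model trained on this data behaves
# normally on clean inputs but produces the attacker's chosen output
# whenever the trigger is present.

import random

TRIGGER = {(0, 0), (0, 1), (1, 0)}  # pixel positions forming the trigger patch

def stamp_trigger(image):
    """Return a copy of the image with the trigger pixels set to max intensity."""
    poisoned = [row[:] for row in image]
    for r, c in TRIGGER:
        poisoned[r][c] = 255
    return poisoned

def poison_dataset(dataset, target_label, rate=0.1, rng=random):
    """Stamp the trigger into a fraction `rate` of (image, label) pairs
    and relabel them as `target_label`; leave the rest untouched."""
    poisoned = []
    for image, label in dataset:
        if rng.random() < rate:
            poisoned.append((stamp_trigger(image), target_label))
        else:
            poisoned.append((image, label))
    return poisoned

# Toy 4x4 "images" labeled 0; the attacker wants triggered inputs to map to 1.
clean = [([[0] * 4 for _ in range(4)], 0) for _ in range(100)]
dirty = poison_dataset(clean, target_label=1, rate=0.1, rng=random.Random(42))
n_poisoned = sum(1 for img, lbl in dirty if lbl == 1)
print(n_poisoned)  # roughly 10 of the 100 examples now carry the trigger
```

Launched during a critical learning period, even a small poisoning rate like this can leave a lasting backdoor, which is why detecting those periods is central to the project.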

Some examples of how federated learning is used across industries:

  • Fraud detection in fintech, or financial technology – Banks and financial institutions use federated learning to build more robust fraud detection models by learning from a wide array of decentralized data points without compromising user privacy.
  • Disease prediction and prevention in healthcare – Medical institutions use federated learning to collaborate and predict disease outbreaks or patient outcomes without sharing sensitive patient data.
  • Autonomous vehicles – Car manufacturers use federated learning to improve the safety and efficiency of autonomous driving algorithms without sharing sensitive data belonging to drivers.

As Wang and his team’s research progresses, they will share datasets, models, and algorithms with the AI and security community, allowing peer researchers to reproduce their experiments and gain insight into their work. Additionally, the team will involve students at the K-12, undergraduate, and graduate levels in their research.

“For undergraduate students, we plan to attract and involve them by providing research assistant positions and hosting robust AI workshops and hackathons,” Wang said. “Two students—Michelle Vo and Sejal Patel—joined my lab from LSU Women in Computer Science and worked on backdoor attacks on AI models. I am also in discussions with the Society for Asian Scientists and Engineers about hosting a robust AI hackathon for undergraduates. We plan to release toolkits to participants to observe and exploit the critical learning periods of AI models.

“We will also involve K-12 students in our project. A few observations and discoveries from the project will be packaged into hands-on projects for students to explore AI and security.”

Like us on Facebook (@lsuengineering) or follow us on Twitter and Instagram (@lsuengineering).


Contact: Joshua Duplechain
Director of Communications