Researchers develop new method to protect AI from "poisoned" data attacks
PR Newswire
MIAMI, Aug. 13, 2025 /PRNewswire/ -- From self-driving cars to power grid management, artificial intelligence systems are becoming increasingly embedded in everyday life. But a growing threat looms: data poisoning attacks that can sabotage these systems.
Data poisoning occurs when cyber attackers insert false or misleading information into the massive datasets used to train AI models, skewing their behavior in dangerous ways. Beyond causing a chatbot to produce gibberish, poisoned models could have real-world consequences, such as making an autonomous vehicle ignore stop signs or disrupting critical infrastructure systems.
FIU cybersecurity researchers have introduced an innovative solution that combines two emerging technologies – federated learning and blockchain – to detect and remove malicious data before it compromises AI models. The study was published in IEEE Transactions on Artificial Intelligence.
"We've built a method that can have many applications for critical infrastructure resilience, transportation cybersecurity, healthcare and more," said Hadi Amini, lead researcher and assistant professor in the FIU Knight Foundation School of Computing and Information Sciences.
The first part of the team's new approach involves federated learning, which allows AI models to train across multiple devices without centralizing sensitive data, reducing privacy risks. However, it remains vulnerable to poisoned updates.
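The idea behind federated learning can be sketched in a few lines. In this illustrative example (not the FIU team's implementation), each client runs gradient descent on its own private data for a simple linear model, and a central server averages only the resulting weight vectors; the raw data never leaves the clients. All names and the toy model here are assumptions for illustration.

```python
# Minimal federated-averaging sketch: clients train locally on private
# data and share only model parameters, never the data itself.
import numpy as np

def local_update(weights, client_data, lr=0.1):
    """One gradient-descent step on a client's private (X, y) data."""
    X, y = client_data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_average(client_weights):
    """Server-side aggregation: average the clients' weight vectors."""
    return np.mean(client_weights, axis=0)

# Two clients with private datasets; the server sees only weight vectors.
rng = np.random.default_rng(0)
w_global = np.zeros(3)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(2)]
for _ in range(10):  # communication rounds
    updates = [local_update(w_global, data) for data in clients]
    w_global = federated_average(updates)
```

The vulnerability the researchers target is visible here: a plain average trusts every client, so a single dishonest weight vector can skew the global model.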
"Verifying whether a user's data is honest or dishonest before it gets to the model is a challenge for federated learning," said Ervin Moore, a Ph.D. candidate in Amini's lab and lead author of the study.
Enter blockchain technology. Widely known for securing cryptocurrency, blockchain adds a tamper-proof layer of verification. FIU's solution uses blockchain to compare block updates, flag outliers and discard potential threats before they reach the global model.
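The study's exact blockchain mechanism is not detailed in this release, but the flag-and-discard step it describes resembles robust outlier filtering. The hypothetical sketch below compares each client's submitted update against the group median and drops statistically extreme updates before aggregation; the function name, threshold, and toy data are assumptions, not the published method.

```python
# Hypothetical flag-and-discard step: compare each client's update
# against the group, flag outliers, and drop them before aggregation.
import numpy as np

def filter_poisoned(updates, z_thresh=3.5):
    """Keep only updates whose distance from the median update is typical
    (robust z-score based on the median absolute deviation)."""
    updates = np.asarray(updates)
    median_update = np.median(updates, axis=0)
    dists = np.linalg.norm(updates - median_update, axis=1)
    mad = np.median(np.abs(dists - np.median(dists))) or 1e-9
    z = np.abs(dists - np.median(dists)) / mad
    return updates[z < z_thresh]

honest = [np.ones(4) + 0.01 * i for i in range(5)]
poisoned = [np.full(4, 100.0)]           # attacker's extreme update
kept = filter_poisoned(honest + poisoned)  # poisoned update is discarded
```

In the FIU design, the comparison and discard decisions are additionally recorded on a blockchain, giving a tamper-proof audit trail of which updates were accepted.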
The research team is advancing this work with partners at the National Center for Transportation Cybersecurity and Resiliency to integrate quantum encryption for even stronger protection of data and systems.
"Our goal is to ensure the safety and security of America's transportation infrastructure while harnessing the power of advanced AI to enhance transportation systems," said Amini.
The project has received support from the ADMIRE Center and the U.S. Department of Transportation's National Center for Transportation Cybersecurity and Resiliency.
About FIU:
Florida International University is a Top 50, preeminent public research university with 55,000 students from all 50 states and more than 140 countries, as well as an alumni network of more than 340,000. Located in the global city of Miami, the university offers more than 200 degree programs at the undergraduate, graduate and professional levels, including medicine and law. FIU faculty are leaders in their fields and include National Academy members, Fulbright Scholars, and MacArthur Genius Fellows. A Carnegie R1 institution, FIU drives impactful research in environmental resilience, health, and technology and innovation. Home to the Wall of Wind and Institute of Environment, FIU stands at the forefront of discovery and innovation. With a focus on student success, economic mobility and community engagement, FIU is redefining what it means to be a public research university.
Media Contact:
Jonathan Ruadez
305-348-8448
jruadezn@fiu.edu
news.fiu.edu
@FIUNews
View original content to download multimedia: https://www.prnewswire.com/news-releases/researchers-develop-new-method-to-protect-ai-from-poisoned-data-attacks-302529266.html
SOURCE Florida International University
