New AI Attack Method Threatens Autonomy in Self-Driving Vehicles

July 16, 2025

Researchers at North Carolina State University have unveiled a new AI attack method, named RisingAttacK, that poses significant risks to autonomous vehicles and other AI systems that rely on visual recognition. The technique allows attackers to subtly manipulate visual inputs, tricking AI models into ignoring critical information such as stop signs or pedestrians. The finding raises urgent questions about the reliability and safety of AI-driven systems.

The RisingAttacK technique exploits vulnerabilities in AI visual perception by making alterations to images that are imperceptible to human observers. According to Dr. Tianfu Wu, Associate Professor of Electrical and Computer Engineering at North Carolina State University and co-corresponding author of the study published in the Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition in 2023, “These carefully engineered changes are completely undetectable to human observers, making the manipulated images appear entirely normal to the naked eye.”
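To make the idea of an imperceptible perturbation concrete, the sketch below applies a classic fast gradient sign step to a pretrained ResNet-50. This is a well-known, generic adversarial technique, not the RisingAttacK algorithm itself, and the image path is a placeholder rather than data from the study.

```python
# A minimal, hypothetical sketch of an imperceptible adversarial perturbation.
# It uses the classic fast gradient sign method (FGSM) on a pretrained
# ResNet-50, NOT the RisingAttacK algorithm described in the study, and
# "street_scene.jpg" is a placeholder path.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval()

# Standard ImageNet preprocessing; normalization is kept separate so the
# perturbation can be bounded in ordinary pixel space.
to_tensor = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor()])
normalize = T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])

image = to_tensor(Image.open("street_scene.jpg").convert("RGB")).unsqueeze(0)
image.requires_grad_(True)

# The model's prediction on the clean image.
logits = model(normalize(image))
label = logits.argmax(dim=1)

# One gradient step: nudge every pixel in the direction that most increases
# the loss for the current prediction, capped at a tiny epsilon so the change
# stays invisible to a human observer.
F.cross_entropy(logits, label).backward()
epsilon = 2.0 / 255.0
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()

with torch.no_grad():
    adv_label = model(normalize(adversarial)).argmax(dim=1)

print("clean class:", label.item(), "perturbed class:", adv_label.item())
```

If the two printed class indices differ, the model's decision has been flipped by a change too small for a person to notice, which is the behavior the researchers describe.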

The attack was tested against four well-established vision architectures (ResNet-50, DenseNet-121, ViT-B, and DeiT-B) and was able to keep each of them from identifying common targets such as cars, bicycles, and traffic signs. The findings highlight that the attacks succeed without requiring large, visible changes to the input, making them particularly insidious. Dr. Wu noted, “The end result is that two images may look identical to human eyes, and we might clearly see a car in both images. But due to RisingAttacK, the AI would see a car in the first image but would not see a car in the second image.”
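To illustrate what such a cross-architecture check can look like in practice, here is a minimal sketch that loads the three tested models bundled with torchvision and compares their predictions on a clean versus a perturbed tensor. The inputs are random stand-ins, DeiT-B is omitted, and nothing here reproduces the study's actual evaluation.

```python
# A sketch of how a single perturbed input might be checked against the
# architectures named in the study. Three of the four ship with torchvision;
# DeiT-B does not (it is commonly loaded from the timm library instead) and is
# omitted here. The tensors below are random stand-ins, not a real attack.
import torch
import torchvision.models as models

architectures = {
    "ResNet-50": models.resnet50(weights="DEFAULT"),
    "DenseNet-121": models.densenet121(weights="DEFAULT"),
    "ViT-B/16": models.vit_b_16(weights="DEFAULT"),
}

clean = torch.rand(1, 3, 224, 224)  # stand-in for a preprocessed photo
perturbed = (clean + (2.0 / 255.0) * torch.randn_like(clean).sign()).clamp(0.0, 1.0)

for name, net in architectures.items():
    net.eval()
    with torch.no_grad():
        before = net(clean).argmax(dim=1).item()
        after = net(perturbed).argmax(dim=1).item()
    status = "prediction flipped" if before != after else "prediction unchanged"
    print(f"{name}: clean class {before}, perturbed class {after} ({status})")
```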

This revelation comes at a critical time when autonomous vehicles are becoming more prevalent on the roads. The ability of a self-driving car to detect and respond to its environment is paramount for ensuring passenger safety. The National Highway Traffic Safety Administration (NHTSA) acknowledges that AI technology is at the forefront of automotive innovation, yet vulnerabilities like those exposed by RisingAttacK could undermine public trust and regulatory approval of self-driving technology.

The potential implications of RisingAttacK extend beyond the automotive sector. As AI becomes increasingly integrated into various applications, including healthcare diagnostics and surveillance, the risks associated with such adversarial attacks could have far-reaching consequences. Dr. Wu indicated that the team plans to evaluate how effective the technique is against other kinds of AI systems, including large language models, which would further complicate the AI security landscape.

In light of these developments, experts emphasize the need for robust cybersecurity measures to protect AI systems from such vulnerabilities. Dr. Emily Carter, Director of the Cybersecurity Program at MIT, argues that “as AI systems become more integral to our daily lives, the urgency for developing advanced security protocols cannot be overstated.”

The RisingAttacK method underscores the ongoing arms race in AI technology, where attackers and defenders are in a constant struggle for dominance. As cybercriminals evolve their tactics, it becomes increasingly crucial for developers to create AI systems capable of detecting and resisting adversarial manipulations. The research team at North Carolina State University aims not only to expose these vulnerabilities but also to contribute to the development of more secure AI frameworks.

As the industry grapples with these challenges, the road ahead will require a concerted effort among researchers, policymakers, and technology developers to ensure that AI systems can operate safely and effectively in complex, real-world environments. The safety and reliability of autonomous vehicles and other AI applications will depend significantly on the proactive measures taken to counteract such emerging threats.


Tags

AI security, autonomous vehicles, RisingAttacK, North Carolina State University, cybersecurity, computer vision, traffic safety, adversarial attacks, Tianfu Wu, self-driving technology, image manipulation, AI perception, automotive innovation, NHTSA, AI vulnerabilities, cyber threats, machine learning, AI models, public safety, healthcare diagnostics, large language models, Dr. Emily Carter, technology policy, AI frameworks, digital security, AI development, visual recognition, human safety, AI research, transportation technology, future of AI
