Researchers use lasers to trick autonomous cars and remove pedestrians from view
A collaboration of researchers from the U.S. and Japan has demonstrated that a laser attack could be used to blind autonomous cars and delete pedestrians from their view, endangering anyone in the vehicle's path, according to a press release.
Autonomous or self-driving cars rely on a spinning laser-based sensing system called LIDAR that helps the vehicle perceive its surroundings. Short for Light Detection and Ranging, the system emits laser pulses and captures their reflections to determine the distances between the vehicle and the obstacles in its path.
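To make the principle concrete, here is a minimal sketch of the time-of-flight calculation a LIDAR performs; the constant is the speed of light, and the pulse timing is an illustrative value rather than one from a real sensor.

```python
# Minimal sketch of the time-of-flight principle behind LIDAR ranging.
# The timing value below is illustrative, not from any real sensor.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to an obstacle given a laser pulse's round-trip time."""
    # The pulse travels out to the obstacle and back, so halve the path.
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A reflection arriving 100 nanoseconds after emission implies ~15 m.
print(distance_from_round_trip(100e-9))  # ≈ 14.99 meters
```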
Most advanced autonomous cars today rely on this system to steer around obstacles in their path. However, the collaboration of researchers from the University of Florida, the University of Michigan, and the University of Electro-Communications in Japan showed the system can be tricked with a fairly basic laser setup.
How researchers tricked an autonomous car
The researchers used a laser to mimic the reflections that the LIDAR sensor usually receives. In the presence of the spoofed signals, the sensor discounted the genuine reflections coming from real obstacles, effectively perceiving those obstacles as if they did not exist.
Using this method, the researchers were able to delete data for static obstacles as well as moving pedestrians. When used against an autonomous vehicle in test conditions, the attack prevented the car from decelerating in the presence of a pedestrian, something it had been programmed to do.
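The sketch below illustrates the general idea in simplified form. It assumes a sensor that keeps only the first (closest) return per firing angle and a perception pipeline that discards returns below a minimum range as noise; the function names, the filter threshold, and the distances are all illustrative, not taken from the paper.

```python
# Simplified illustration of obstacle removal via spoofed returns.
# Assumptions (not from the paper): the sensor keeps only the first
# (closest) return per firing angle, and the perception pipeline drops
# returns below a minimum range as noise. All numbers are made up.

MIN_VALID_RANGE_M = 1.0  # hypothetical near-range noise filter

def first_return(genuine_m: float, spoofed_m: float | None) -> float:
    """The sensor reports whichever return arrives first (is closer)."""
    if spoofed_m is not None and spoofed_m < genuine_m:
        return spoofed_m
    return genuine_m

def after_range_filter(returns_m: list[float]) -> list[float]:
    """The pipeline discards returns closer than the minimum range."""
    return [r for r in returns_m if r >= MIN_VALID_RANGE_M]

# A pedestrian 10 m away, seen across five firing angles.
genuine = [10.0] * 5
# The attacker injects returns at 0.5 m, inside the filtered-out zone.
measured = [first_return(g, spoofed_m=0.5) for g in genuine]
print(after_range_filter(measured))  # [] -- the pedestrian has vanished
```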

The laser attack was conducted from the side of the road against an approaching vehicle, from no more than 15 feet away. The researchers also used only basic camera tracking equipment for their experiments; with more sophisticated equipment, the attack could be carried out from a greater distance.
How can a laser attack be thwarted?
This is the first reported instance of a LIDAR system being spoofed to prevent it from detecting obstacles, rather than to inject fake ones. Spoofing requires timing the laser signal aimed at the LIDAR sensor with a fair degree of accuracy; however, the timing data needed to synchronize the attack is publicly available from LIDAR manufacturers, one of the researchers associated with the study said in a press release.
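As a rough sketch of that timing problem: a spinning LIDAR only accepts light from the attacker's direction for a brief window on each revolution, so the spoofing pulses must be synchronized to the scan. The rotation rate, bearing, and beam width below are illustrative values, not figures from any manufacturer's datasheet.

```python
# Rough sketch of the synchronization constraint on a spoofing laser:
# the spinning sensor faces the attacker only briefly each revolution.
# The rotation rate and angles are illustrative, not from a datasheet.

ROTATION_HZ = 10.0               # hypothetical 10 revolutions per second
REVOLUTION_S = 1.0 / ROTATION_HZ

def firing_window(attacker_bearing_deg: float, beam_width_deg: float):
    """Time offsets within one revolution when the sensor faces the attacker."""
    start = (attacker_bearing_deg - beam_width_deg / 2) / 360.0 * REVOLUTION_S
    end = (attacker_bearing_deg + beam_width_deg / 2) / 360.0 * REVOLUTION_S
    return start, end

start, end = firing_window(attacker_bearing_deg=90.0, beam_width_deg=1.0)
print(f"fire between {start * 1e3:.3f} ms and {end * 1e3:.3f} ms "
      f"after the sensor passes its zero mark")  # a ~0.278 ms window
```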
The researchers carried out these tests to help build more reliable sensor systems. Manufacturers could now update their software to detect instances of such an attack and switch to an alternate method of obstacle detection. Alternatively, future hardware could be hardened against such attacks.
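As one illustration of what such a software-side check might look like (a sketch only; the threshold and fallback are hypothetical, not the paper's or any vendor's actual defense): a sudden, near-total loss of points inside a tracked obstacle's region is physically implausible and could trigger a fall-back detector.

```python
# Hypothetical software-side check, sketched for illustration only:
# flag a sweep where a tracked obstacle loses most of its cloud points
# at once, which rarely happens for genuine obstacles. The threshold
# and fallback are assumptions, not from the paper or any vendor.

SUSPICIOUS_DROP_RATIO = 0.5  # hypothetical: >50% point loss in one sweep

def looks_like_removal_attack(prev_points: int, curr_points: int) -> bool:
    """Heuristic: genuine obstacles rarely shed most points in one sweep."""
    if prev_points == 0:
        return False
    lost_fraction = (prev_points - curr_points) / prev_points
    return lost_fraction > SUSPICIOUS_DROP_RATIO

# A pedestrian cluster of 180 points suddenly collapses to 12 points.
if looks_like_removal_attack(prev_points=180, curr_points=12):
    print("possible removal attack: fall back to camera/radar detection")
```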
With the rise of autonomous driving systems, an attack on a LIDAR system is a plausible way to confuse cars, especially once humans are no longer at the wheel. The results of such attacks could be catastrophic, and efforts must be made at this early stage to minimize or eliminate such vulnerabilities.
The research findings were published on the preprint server arXiv.org and will also be presented at the 2023 USENIX Security Symposium.
Abstract
Autonomous Vehicles (AVs) increasingly use LiDAR-based object detection systems to perceive other vehicles and pedestrians on the road. While existing attacks on LiDAR-based autonomous driving architectures focus on lowering the confidence score of AV object detection models to induce obstacle misdetection, our research discovers how to leverage laser-based spoofing techniques to selectively remove the LiDAR point cloud data of genuine obstacles at the sensor level before being used as input to the AV perception. The ablation of this critical LiDAR information causes autonomous driving obstacle detectors to fail to identify and locate obstacles and, consequently, induces AVs to make dangerous automatic driving decisions. In this paper, we present a method invisible to the human eye that hides objects and deceives autonomous vehicles' obstacle detectors by exploiting inherent automatic transformation and filtering processes of LiDAR sensor data integrated with autonomous driving frameworks. We call such attacks Physical Removal Attacks (PRA), and we demonstrate their effectiveness against three popular AV obstacle detectors (Apollo, Autoware, PointPillars), and we achieve 45° attack capability. We evaluate the attack impact on three fusion models (Frustum-ConvNet, AVOD, and Integrated-Semantic Level Fusion) and the consequences on the driving decision using LGSVL, an industry-grade simulator. In our moving vehicle scenarios, we achieve a 92.7% success rate removing 90% of a target obstacle's cloud points. Finally, we demonstrate the attack's success against two popular defenses against spoofing and object hiding attacks and discuss two enhanced defense strategies to mitigate our attack.