Small stickers on the ground trick Tesla autopilot into opposing traffic lane.
05/05/2019
Do we understand AI, and do we trust AI? These were the two basic questions raised by Katherine Jarmul during her presentation at the Being Human with Algorithms symposium in Heidelberg last year. Her answer was that we should be extremely careful, because we are still at an early stage, before machine learning systems can reliably cope with the real world. There are patterns that humans recognize easily but that still pose a problem for AI systems. The literature is full of adversarial examples in which turtles were recognized as guns and small patches on stop signs were interpreted as go-faster signs. The latest example is a recent attack on the auto-steering system of a Tesla Model S 75.
Researchers from Tencent Keen Security Lab have published a report detailing their successful attacks on Tesla firmware, including remote control of the steering, and an adversarial example attack on the autopilot that tricks the car into driving into the oncoming traffic lane.
The researchers used an attack chain that they disclosed to Tesla, and which Tesla now claims has been eliminated with recent patches.
To effect the remote steering attack, the researchers had to bypass several redundant layers of protection, but having done so, they were able to write an app that let them connect a video-game controller to a mobile device and then steer a target vehicle, overriding both the physical steering wheel and the autopilot. The attack has some limitations: while a car in Park or traveling at high speed on Cruise Control can be taken over completely, a car that has recently shifted from R to D can only be remotely controlled at speeds up to 8 km/h.
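The published report does not spell out the interface between the app and the patched firmware, so the sketch below is purely hypothetical: it reads the left stick of a game controller with pygame and forwards a clamped steering angle to send_steering_command, a placeholder function invented for illustration rather than a real Tesla or Keen Lab API.

```python
import pygame

def send_steering_command(angle_deg: float) -> None:
    # Placeholder only: prints the requested angle instead of talking to a car.
    print(f"steering request: {angle_deg:+.1f} degrees")

MAX_ANGLE_DEG = 30.0   # assumed clamp so a full stick deflection stays modest

def main() -> None:
    pygame.init()
    pygame.joystick.init()
    stick = pygame.joystick.Joystick(0)   # first connected game controller
    stick.init()                          # no-op on pygame 2, needed on pygame 1
    clock = pygame.time.Clock()
    while True:
        pygame.event.pump()                           # refresh controller state
        angle = stick.get_axis(0) * MAX_ANGLE_DEG     # left stick X axis -> angle
        send_steering_command(angle)
        clock.tick(20)                                # ~20 requests per second

if __name__ == "__main__":
    main()
```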
Tesla vehicles use a variety of neural networks for autopilot and other functions (such as detecting rain on the windscreen and switching on the wipers); the researchers were able to use adversarial examples (small, mostly human-imperceptible changes that cause machine learning systems to make gross, out-of-proportion errors) to attack these.
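To make the mechanics concrete, here is a minimal sketch of the textbook fast gradient sign method (FGSM) in PyTorch: every pixel of an input image is nudged slightly in the direction that increases the classifier's loss, producing a near-imperceptible perturbation. This is a generic illustration of how adversarial examples are crafted, not the method Keen Security Lab used against Tesla's networks; the ResNet-18 classifier and the random stand-in image are assumptions made for the example.

```python
import torch
import torch.nn.functional as F
import torchvision.models as models

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return a copy of `image` with a small FGSM perturbation added.

    image: tensor of shape (1, 3, H, W) with values in [0, 1]
    epsilon: maximum per-pixel change; kept small so the noise stays
             near-imperceptible to a human observer
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step every pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Stock ImageNet classifier and a random stand-in image.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
    image = torch.rand(1, 3, 224, 224)
    label = model(image).argmax(dim=1)      # the model's own prediction
    adversarial = fgsm_perturb(model, image, label)
    print("before:", label.item(),
          "after: ", model(adversarial).argmax(dim=1).item())
```

On a real photograph, even a small epsilon is often enough to change the predicted label, while larger values make the perturbation visible, much like the conspicuous lane-marking patches described below.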
Most dramatically, the researchers attacked the autopilot’s lane-detection system. By adding noise to the lane markings, they were able to fool the autopilot into losing the lanes altogether; however, the patches they had to apply to the markings would not be hard for humans to spot.
Much more seriously, they were able to use “small stickers” on the ground to effect a “fake lane attack” that fooled the autopilot into steering into the opposite lane, where oncoming traffic would be moving. This worked even when the targeted vehicle was operating in daylight without snow, dust, or other interference.
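The report does not disclose the exact network or optimisation procedure the researchers targeted, so the following is only a rough sketch of the general adversarial-patch idea: instead of spreading invisible noise over the whole image, the attacker optimises the pixels of one small region so that the model's output drifts toward a chosen (wrong) lane estimate. The tiny lane_model, the random road image, and the target values below are all invented stand-ins.

```python
import torch
import torch.nn.functional as F

def optimise_patch(lane_model, road_image, target_output,
                   patch_size=32, steps=200, lr=0.1):
    """Optimise a small square patch so that pasting it into the scene
    pushes the model's output toward `target_output`."""
    _, _, h, w = road_image.shape
    # Start from a grey square near the bottom-centre of the frame,
    # roughly where a sticker on the road surface would appear.
    patch = torch.full((1, 3, patch_size, patch_size), 0.5, requires_grad=True)
    top, left = h - patch_size - 10, (w - patch_size) // 2
    optimiser = torch.optim.Adam([patch], lr=lr)

    for _ in range(steps):
        pasted = road_image.clone()
        pasted[:, :, top:top + patch_size, left:left + patch_size] = patch
        loss = F.mse_loss(lane_model(pasted), target_output)
        optimiser.zero_grad()
        loss.backward()
        optimiser.step()
        with torch.no_grad():
            patch.clamp_(0.0, 1.0)   # keep pixel values valid/printable
    return patch.detach()

if __name__ == "__main__":
    # Toy stand-in network mapping a road image to a 2-number lane estimate.
    lane_model = torch.nn.Sequential(
        torch.nn.Conv2d(3, 8, 5, stride=4), torch.nn.ReLU(),
        torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
        torch.nn.Linear(8, 2))
    road = torch.rand(1, 3, 128, 256)        # stand-in road image
    target = torch.tensor([[1.0, 1.0]])      # attacker-chosen lane estimate
    sticker = optimise_patch(lane_model, road, target)
    print("patch shape:", tuple(sticker.shape))
```

Unlike the imperceptible FGSM noise above, a patch like this is clearly visible, but it only has to look innocuous, a few faint marks on the asphalt, to go unnoticed by a human driver.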