Car security is finally being taken seriously, but the fact that we are increasingly entrusting our lives to self-driving cars still creates unease

“Car companies are finally realising that what they sell is just a big computer you sit in,” says Kevin Tighe, a senior systems engineer at the security testing firm Bugcrowd.

It’s meant to be a reassuring statement: proof that the world’s major vehicle manufacturers are finally coming to terms with their responsibilities to customers, and taking the security of vehicles seriously.

But given where Tighe and I are talking, it’s hard not to be slightly uneasy about the idea that it’s normal to sit inside a massive computer and trust it with your life. We’re meeting at Defcon, the world’s largest hacking conference, just outside the “car-hacking village”, a recent addition to the convention’s lineup, where enthusiasts meet to trade tips on how to mess about with those same computers for fun and profit.

A self-driving vehicle picks up passengers during a demonstration in Singapore. Photograph: Edgar Su/Reuters

The village, one of a number of breakout areas (others include biohacking, lock picking and “social engineering” – the art and science of talking people into doing stuff they shouldn’t), was instituted last year. Also in 2015, two researchers, from the security consultancy IOActive and Twitter, turned car hacking from a vaguely theoretical pursuit into one with terrifying consequences.

At that year’s Defcon, Twitter’s Charlie Miller and IOActive’s Chris Valasek demonstrated they were able to wirelessly take over a Jeep. They used a laptop connected to the internet miles from the vehicle to seize control of it, cutting the brakes and transmission at the flick of a switch.

It sparked a worldwide recall for the affected cars – which included much of Fiat Chrysler’s range. It also exposed serious problems with how the car companies planned to handle such software flaws. Even though the hack could be executed remotely, it could only be fixed with physical access to the car, forcing Fiat Chrysler to post USB keys to affected owners, or ask them to bring their cars in for maintenance. Posting USB keys brought its own problems: plugging an untrusted USB key into anything, whether car or computer, carries serious risks. It’s also hard for anyone to easily verify that a drive received in the post is malware-free.

Some fixes were easier to carry out, though. Speaking at this year’s Black Hat conference in Las Vegas (think Defcon but in suits, taking place a few days earlier), Valasek and Miller – now both employed as researchers at Uber – revealed that one of the more effective changes Fiat Chrysler made was simply asking Sprint, the cellular provider that connected all the cars to the net, to block all incoming traffic.

“This made the vulnerability kind of go away,” Miller said, as Valasek pointed out that the cars never really needed the incoming connections in the first place. The service had just been kept open because no one had thought to turn it off.

That’s good, because if it were still open, the situation would be much worse today than it was last year. Although the Jeep hack was spectacular, it came with severe limitations. The pair had managed to use a bug in the car’s entertainment system, which was connected to the net, to tunnel through to the supposedly secure internal network – the Can bus – which the various components of the car use to talk to each other.

But simply having access to the network didn’t mean they were able to seize control of the car. Without the ability to stop the car sending its own messages, the hackers’ own commands were usually overruled by the car’s system, or simply recognised as a conflict that caused the car to err on the side of safety and turn off the feature altogether.

In 2015, they had managed to tackle the problem by forcing the car into diagnostic mode, which allowed them far greater control. But most cars built since 2015 disable diagnostic mode when the car is in motion, meaning that the hacks can only be started when the car is travelling less than 5mph. “It’s a nice parlour trick,” said Miller. “But I don’t think it affects safety.”

So the pair’s past year has been spent working out whether that safety feature can be turned off. Bad news: it can.

The trick lies in working out how the various components talk to each other, and what they expect to hear over the Can bus. “There are times you can have conflicting messages and the car will do what you want,” Miller said. For instance, the way cruise control works in the Jeep means that, rather than sending a message saying “cruise control is on/off”, the bus instead says “the button to turn cruise control on is/is not pressed”. So when the message is inserted into the feed saying “the button to turn cruise control on is pressed”, it will enable cruise control without sparking a conflict internally (a breakthrough demonstrated with video of a panicky Valasek sitting in the passenger seat of an otherwise empty car rapidly accelerating to 40mph on a deserted rural road).
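The quirk can be sketched in a few lines of Python. This is a toy model, not real vehicle code: the message ID and field names are invented for illustration, and real Can bus frames differ per manufacturer. The point it shows is that an event-style message ("the button is pressed") can be injected without ever contradicting a state message, so no conflict is raised.

```python
# Toy simulation of the event-style cruise-control message described above.
# The message ID (0x1F4) and field names are invented for illustration.

class ToyEcu:
    """A simplified controller that reacts to 'button pressed' events."""

    def __init__(self):
        self.cruise_on = False

    def on_frame(self, frame):
        # The bus carries "the button is pressed", not "cruise is on/off",
        # so a single injected event flips the state without contradicting
        # any legitimate state message already on the bus.
        if frame.get("id") == 0x1F4 and frame.get("button_pressed"):
            self.cruise_on = True

ecu = ToyEcu()
# Legitimate traffic: the button is not pressed.
ecu.on_frame({"id": 0x1F4, "button_pressed": False})
assert ecu.cruise_on is False
# Attacker injects a single "button pressed" event.
ecu.on_frame({"id": 0x1F4, "button_pressed": True})
assert ecu.cruise_on is True  # cruise control enabled, no conflict raised
```

Because the receiving controller has no notion of where a frame came from, the forged event is indistinguishable from a genuine button press.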

A Lexus SUV equipped with Google self-driving sensors during a media preview of the firm’s prototype vehicle. Photograph: Elijah Nouvelage/Reuters

For other controls, such as direct control of the accelerator or brakes, that simple approach doesn’t work. But after analysing how the system determined conflicts, the pair found a simple workaround. Each message sent on the Can bus has a number, which increments by one each time. If the system receives three or more messages with the same number, it declares a fault and throws the whole thing out.

But what the pair discovered is that if the attacker’s message is sent first, with the correct counter, then the real message gets ignored. And if the next message is one further incremented, then the system never fully goes into lockdown, allowing the attacker to control the car: they can turn the steering wheel, hit the gas or slam on the brakes at any speed. Testing that one on the same rural road ended with the pair losing control and crashing into a ditch.
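The race can be illustrated with a toy model of the counter check. Everything here is simplified and the thresholds are taken only from the description above: frames carry a counter that increments by one, and three or more frames sharing a counter value trigger a fault.

```python
# Toy model of the counter check described above. An attacker who races
# their frame in first, then stays one step ahead of the counter, is
# accepted while the real frames are discarded as duplicates.

class Receiver:
    def __init__(self):
        self.expected = 0      # next counter value we expect to see
        self.dupes = 0         # frames seen with the previous counter
        self.faulted = False
        self.accepted = []

    def on_frame(self, counter, payload):
        if self.faulted:
            return
        if counter == self.expected:
            self.accepted.append(payload)  # first frame with this counter wins
            self.expected += 1
            self.dupes = 1
        elif counter == self.expected - 1:
            self.dupes += 1                # a repeat of the last counter
            if self.dupes >= 3:
                self.faulted = True        # lockdown: feature disabled
        # stale or far-ahead frames are simply ignored in this sketch

rx = Receiver()
rx.on_frame(0, "attacker: turn wheel")   # attacker races in first
rx.on_frame(0, "real: wheel centred")    # real frame discarded as a dupe
rx.on_frame(1, "attacker: turn wheel")   # attacker stays one step ahead
assert rx.accepted == ["attacker: turn wheel", "attacker: turn wheel"]
assert rx.faulted is False               # the three-dupe fault never fires
```

Because each counter value is only ever duplicated once, the receiver never reaches its fault threshold and keeps obeying the attacker.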

There’s one saving grace to all this though: ever since Fiat Chrysler fixed the hole disclosed last year, such attacks can only be done with physical access to the car. That’s the company’s response to the research: “While we admire their creativity, it appears that the researchers have not identified any new remote way to compromise a 2014 Jeep Cherokee or other FCA US vehicles,” it said. It’s true, but the pair warn others not to dismiss their findings that quickly: if they hadn’t found the earlier bug, then the cars would still be open to just this sort of attack, and it would be much more damaging.

“All these attacks would have worked if you had a remote attack,” said Valasek. “It would have worked in 2015.” Valasek and Miller offered some simple fixes that would make doing what they did much harder: code-signing would make reprogramming the onboard computers much harder, while better intrusion detection would throw up alarms earlier in the process, and not be fooled by simply incrementing a counter.
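The stricter detection the researchers suggest can be sketched simply. This is an illustrative assumption about how such a check might work, not code from any real intrusion-detection system: rather than tolerating two copies of a counter before declaring a fault, treat even one duplicate as suspicious.

```python
# Sketch of counter-aware intrusion detection: alarm on the first
# duplicated counter value, rather than waiting for three copies.
# Purely illustrative; real systems would also check timing and source.

def detect_injection(counters):
    """Return True if any counter value appears more than once."""
    seen = set()
    for c in counters:
        if c in seen:
            return True   # duplicate counter: likely an injected frame
        seen.add(c)
    return False

assert detect_injection([0, 1, 2, 3]) is False  # normal incrementing traffic
assert detect_injection([0, 0, 1, 2]) is True   # attacker raced frame 0
```

Against the race described above, this raises an alarm on the very first forged frame, since the real sender’s frame still arrives with the now-duplicated counter.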

But for Tighe, the success of Valasek and Miller is proof of the opposite: not that the car’s internal processes should be more suspicious, but that it’s in their nature to be naive and open. Speed, he says, is of the essence in these systems. If the brake pedal says brake, it’s important that the brake pads not waste time checking that the message really came from the brake pedal: instead, they need to get on with the important business of braking.

Tighe thinks the solution lies in ensuring that unauthorised users cannot send messages on the Can bus in the first place, not wasting valuable time encrypting, signing or reverifying messages sent between the internal processes. It’s an approach he says works well for military hardware, where the time penalty is even fiercer.

It’s also one that necessarily involves putting all your eggs in one basket, however. No one thinks the car’s internal computers should be completely open to outsiders. The question is how much damage they should be able to do if they can find a way in anyway.

Of course, that assumes that access to the car’s internal computers is needed for a malicious attacker to do harm. Another group of researchers at Defcon presented their own form of car hacking which uses the very smartness of modern automobiles as the weapon.

Three researchers, from China’s Zhejiang University and the internet security company Qihoo 360, took aim at the many sensors that modern cars are equipped with – particularly those with automation features. “The reliability of the sensors directly affects the reliability of autonomous driving,” said Chen Yan. The death of Joshua Brown, whose Tesla hit the side of a truck while the car’s autopilot mode was engaged, underscores that: the car failed to see the white truck against the bright sky and ploughed into it.

Chen and his fellow researchers showed that artificially creating a similar situation might not be as hard as it should be. The three subjected a Tesla Model S and an Audi equipped with self-parking features to a battery of attacks designed to leave them blinded in all their senses.

The wreck of the Tesla crash in which Joshua Brown died. Photograph: AP

Self-driving cars use a number of sensors, for various purposes. Ultrasound is used, like a bat’s echolocation, for determining the distance of close objects (useful for making sure you don’t hit a wall when reversing), while millimetre-wave radio is the core part of the radar component that lets the car map out the stretch of road immediately ahead of it (so you don’t end up rear-ending someone while using adaptive cruise control).

Those sensors, though, can be jammed, spoofed or muted, and cars don’t tend to react well when they are. Jamming involves drowning the sensor out with noise; spoofing tricks it by blasting forged responses back at it; while muting applies the same technique used in noise-cancelling headphones to diminish the power of the original signal.

Unfortunately for the car manufacturers, the more complex techniques often are not necessary. Simply playing a loud enough ultrasound burst to drown out the echolocation, for instance, is remarkably effective. Rather than going into a failsafe mode and assuming that there’s an obstacle immediately in front, both the Audi and Tesla instead assume that there’s nothing for the next half-kilometre. Bravely or foolishly, one of the team demonstrated that fact by standing next to the car as it was driving towards the jammer. The car hit him, albeit slowly.
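The failure mode can be captured in a short sketch. The threshold and range values here are invented for illustration; the behaviour modelled is the one the researchers observed, where a drowned-out echo is read as open road rather than as a blinded sensor.

```python
# Sketch of the ultrasonic failure mode described above, plus a possible
# failsafe alternative. MAX_RANGE_M and the noise threshold are invented.

MAX_RANGE_M = 500  # reading returned when no echo comes back

def naive_distance(echo_detected, distance_m):
    # Observed behaviour: no echo is interpreted as nothing ahead.
    return distance_m if echo_detected else MAX_RANGE_M

def failsafe_distance(echo_detected, distance_m, noise_floor):
    # Safer alternative: a saturated noise floor with no clean echo means
    # the sensor is being jammed, so report "unknown" and let the car
    # treat that as a potential obstacle.
    if not echo_detected and noise_floor > 0.8:
        return None  # unknown: slow down rather than assume a clear road
    return distance_m if echo_detected else MAX_RANGE_M

assert naive_distance(False, 0.0) == MAX_RANGE_M    # jammed reads as clear
assert failsafe_distance(False, 0.0, 0.95) is None  # jammed reads as unknown
```

The difference between the two functions is exactly the difference between a car that drives into the jammer and one that stops.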

A similar, if more complex, attack worked on the radar. The team built a machine for generating radar interference, and were able to make a car simply disappear from the Tesla’s autopilot view. Importantly, neither attack led to a failsafe state: the car simply assumed there was nothing to see where it couldn’t see anything.

“Sensors should be designed with security in mind,” said Jianhao Liu, another of the three researchers, “so they should always think about intentional attacks, especially when the sensor is going to play a very important role in self-driving cars.”

Following the collision in May, Tesla pointed out that its autopilot feature was not intended for fully autonomous driving. Drivers should be ready to take the wheel at any time. The company has downplayed the sensor attack. “We appreciate the work Wenyuan [Xu] and team put into researching potential attacks on sensors used in the autopilot system,” a spokesperson said. “We have reviewed these results with Wenyuan’s team and have thus far not been able to reproduce any real-world cases that pose risk to Tesla drivers.”

The response in the car-hacking village to the research was mixed. While some were happy that the research was being done, others saw it as less worthy than the genuine hacking, arguing that spoofing sensors isn’t really any different from causing car crashes by shining a laser into drivers’ eyes.

It’s hard to find unanimity among hackers on anything. People who use “herding cats” as the apotheosis of a tricky organisational challenge have never had to herd information security experts. But the group of people united by the motivation to push computer security to its absolute limit seem to agree on one thing, at least: car hacking is here to stay, and sooner or later, you’ll be hit too.


Jeep owners urged to update their cars after hackers take remote control

Security bug allows remote attack of Uconnect system, letting hackers apply the brakes, kill the engine and take control of steering over the internet

Security experts are urging owners of Fiat Chrysler Automobiles vehicles to update their onboard software after hackers took control of a Jeep over the internet and disabled the engine and brakes and crashed it into a ditch.

A security hole in FCA’s Uconnect internet-enabled software allows hackers to remotely access the car’s systems and take control. Unlike some other cyberattacks on cars where only the entertainment system is vulnerable, the Uconnect hack affects driving systems from the GPS and windscreen wipers to the steering, brakes and engine control.

The Uconnect system is installed in hundreds of thousands of cars made by the FCA group since late 2013 and allows owners to remotely start the car, unlock doors and flash the headlights using an app.

The hack was demonstrated by Charlie Miller and Chris Valasek, two security researchers who previously demonstrated attacks on a Toyota Prius and a Ford Escape. Using a laptop and a mobile phone on the Sprint network, they took control of a Jeep Cherokee while Wired reporter Andy Greenberg was driving, demonstrating their ability to control it and eventually forcing it into a ditch.

Unlike the majority of hacking attempts on cars, the vulnerability within the Uconnect system allows cybercriminals to take control of the car remotely, without the need to make physical contact with the car.


The security researchers notified Fiat Chrysler nine months ago, allowing the car manufacturer to release a security update to fix the problem, which it did on 16 July.

However, the update requires users to update their cars manually by visiting the manufacturer’s site, downloading a program on to a flash drive and inserting it into the car’s USB socket. FCA dealers can update the car for owners, but the company is apparently unable to update the cars automatically over the internet.

“This update might not sound particularly important, but trust me, if you can, you really should install this one,” Miller said on Twitter.

Independent security expert Graham Cluley added: “Note that the researchers believe that, although they’ve only tested it out on Jeeps, the attacks could be tweaked to work on any Chrysler car with a vulnerable Uconnect head unit.”

“You should consider installing a security update that Jeep has issued for cars fitted with a model RA3 or model RA4 radio/navigation system.”

It is unclear whether the vulnerability within the Uconnect system is confined to US cars, or certain models.

A FCA spokesperson said on Wednesday: “Under no circumstances does FCA condone or believe it’s appropriate to disclose ‘how-to information’ that would potentially encourage, or help enable hackers to gain unauthorized and unlawful access to vehicle systems.”

“FCA released a software update that offers customers improved vehicle electronic security and communications system enhancements. The Company monitors and tests the information systems of all of its products to identify and eliminate vulnerabilities in the ordinary course of business. Customers can either download and install this particular update themselves or, if preferred, their dealer can complete this one-time update at no cost to customers.”
