Theory: Tesla FSD Is Doomed
Tesla "(Supervised) Full Self Driving"—also known as "FSD", "Full Self Driving", "Full Self Driving Beta", "FSD Intelligent Assisted Driving", "Intelligent Assisted Driving", "Autopilot", "Basic Autopilot", "Enhanced Autopilot", "City Streets", "Autosteer on City Streets", "Traffic Aware Cruise Control", "Smart Summon", "Actually Smart Summon", "Autopark", "4:20 Mode" and "Elon Take the Wheel Mode"—is a level 2 advanced driver-assistance system (ADAS) available in Tesla vehicles. A level 2 system has the ability to control a car's steering and speed, but the human driver is expected to perform all driving tasks, with the system only taking control for short periods of time, if and when it detects that the driver is not properly responding to a hazard, such as an avoidable collision.
If you've ever let go of the steering wheel, causing the car to stray off its lane, but then had the car realign itself within the lane, that was your level 2 ADAS covering for your temporary lack of attention. Similarly, if you've ever taken your eyes off the road and grabbed your smartphone in order to scroll between pictures of your childhood friends eating at restaurants on Facebook, and suddenly your car braked hard, that was your level 2 ADAS preventing you from rear-ending the car that you failed to notice was stopped right in front of you.
The power balance here is clear: the human is the one in control of the car; the system only kicks in to cover for them when they fail to respond in a timely manner to an imminent danger.
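Sketched in C-flavored pseudocode (a conceptual illustration only, with invented names, not any vendor's actual code), that conventional arrangement looks something like this:

```c
/* Conceptual sketch of a conventional level 2 ADAS loop as described above:
 * the human drives, the system only intervenes when it detects a hazard the
 * human isn't handling. All names here are made up. */
#include <stdbool.h>
#include <stdio.h>

struct situation { bool hazard_detected; bool driver_responding; };

static void follow_driver_inputs(void) { puts("human steering, human speed"); }
static void intervene_briefly(void)    { puts("ADAS brakes / nudges back into lane"); }

static void adas_step(struct situation s)
{
    if (s.hazard_detected && !s.driver_responding)
        intervene_briefly();        /* the system covers for the human */
    else
        follow_driver_inputs();     /* the human stays in control */
}

int main(void)
{
    adas_step((struct situation){ .hazard_detected = false });
    adas_step((struct situation){ .hazard_detected = true, .driver_responding = false });
    return 0;
}
```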
For more than a decade, Tesla CEO Elon Musk has promised that Tesla's level 2 ADAS would quickly become a level 5 system, and has alternately claimed that it will be—or is already—safer than human drivers, without providing any verifiable metrics to support this. In fact, the only metrics available show that it is far, far less safe than a human.
In a level 5 system, the ADAS assumes responsibility for performing all driving tasks, and the driver (well, passenger) is free to scroll on Facebook as much as they want. Responsibility here also implies liability, for example in case of an accident. Musk also promised for many years that Tesla would start a self-driving taxi service, and that Tesla owners would be able to enroll their personal vehicles in that service, generating revenue. It's important to note that these promises are what made Musk the wealthiest man on Earth, and what is currently keeping him there, even though they have never materialized, much like many of his other "guesses": solar roof tiles, hyperloops, electric semitrailer convoys, Mars colonies, diaper-changing robots, and many more.
Musk's extremely public claims with regards to FSD stand in stark contrast to Tesla's own filings with the California DMV and the NHTSA, which unequivocally state that FSD is a level 2 system and will remain one.
When customers originally ordered FSD back in 2016, this was how Tesla described the feature on FSD's order page on the Tesla website:
[Screenshot of the FSD feature description from Tesla's 2016 order page]
Note how the above text (intentionally?) avoids making it clear that the mentioned features had not been developed yet. Further, note how the only potential obstacle described isn't failure to develop them, but rather regulatory hindrances. Almost every word in this text is false. The system is not designed to make trips with no driver action; the driver must be ready at all times to control the car. There is no hands-free charging. There is no automatic calendar integration. There is no self-park mode that works with the driver already out of the vehicle and on with their day. Summoning doesn't work and sometimes crashes into airplanes. Tesla has never filed any paperwork with regulators to register FSD as a level 4/5 system or even to test it as such. There is no "Tesla Network". And most importantly, the driver is still liable, even if FSD suddenly veers to the shoulder and crashes into an emergency vehicle.
Still, with the theory (or maybe aspiration. Or was it exaggeration? Presumption? Assumption? Just a guess?) that FSD would become an actual self-driving system, Tesla decided to design its ADAS to behave in the exact opposite way from every other such system on the market: instead of the system covering for a misbehaving driver, in FSD it's the driver who is expected to cover for a misbehaving system. For example, if the FSD software decides to put the car on a collision course with a fire truck, the driver needs to somehow realize this whilst there's still enough time to react. Ironically, this means that the cognitive demands on the driver are actually greater when using FSD than when driving by oneself. Previously, the driver needed to be attentive at all times, but if—as is bound to happen—they lost attention for a short while, the system would probably have been there to cover for them. With FSD, however, not only does the driver need to be attentive at all times like any other driver, but there is no system to correct for their inattention. No, the driver now needs to predict, anticipate, recognize and react to the actions of an opaque AI system whose behavior in any given situation no human can account for. It's pure insanity. While human brains are extremely powerful machines, computers can make complex calculations much faster and more accurately. Expecting a relatively slow actor to correct for a much faster one is ridiculous.
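To put a rough number on that mismatch (my own back-of-the-envelope figures, not measurements of any particular system):

```c
/* Back-of-the-envelope arithmetic with assumed numbers: how far the car
 * travels before a typical human even starts correcting an FSD mistake. */
#include <stdio.h>

int main(void)
{
    const double speed_kmh  = 110.0;              /* assumed highway speed */
    const double reaction_s = 1.5;                /* commonly cited perception-reaction time */
    const double speed_ms   = speed_kmh / 3.6;    /* ~30.6 m/s */
    const double distance_m = speed_ms * reaction_s;  /* ~45.8 m */

    printf("At %.0f km/h, %.1f s of reaction time = %.1f m travelled\n",
           speed_kmh, reaction_s, distance_m);
    /* And that's before the driver has actually done anything -- only noticed. */
    return 0;
}
```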
In its official documentation, Tesla itself admits that FSD "can do the wrong thing at the worst time". This sentence betrays the entire premise of the FSD feature and especially its dependence on a human fallback: the worst time is any time after said human can no longer correct for FSD's erroneous behavior. This can only mean one thing: sometimes, correcting for FSD actions will be impossible, and thus collisions and accidents unavoidable. And in the eyes of the law, it will be considered the driver's fault, not Tesla's. It's baffling that this system is allowed on US roads.
For a long time, there have been reports that, when it detects an unavoidable collision, FSD disengages a fraction of a second before impact. The NHTSA reported as much after investigating many accidents, and a recent user test seems to have replicated this behavior too. Online commentators often take this to mean that FSD is intentionally programmed this way in order to shield Tesla from liability, as the company could always claim that its level 2 ADAS "was not active at the time of collision". Indeed, this is a common refrain made by Tesla in high-profile accidents and official filings. Regardless, this behavior shouldn't be surprising, as it is entirely in line with the design of the system: when the system misbehaves or doesn't know what to do, it expects the driver to cover for it. This means relinquishing control of the vehicle back to the driver, or in other words, disengaging. FSD acted exactly as documented (but the opposite of as advertised): it did the wrong thing at the worst time. Good luck, fuckers.
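Sketched out, the design I'm describing reads something like this (pure speculation on my part, with invented names; this is not Tesla's actual code):

```c
/* Speculative sketch of the reported behavior: when the planner no longer
 * knows what to do, it hands control back to the human, however little time
 * is left. Every name here is made up. */
#include <stdbool.h>
#include <stdio.h>

struct plan { double confidence; bool collision_unavoidable; };

static void disengage(void)        { puts("FSD disengaged: your car now"); }
static void execute(struct plan p) { (void)p; puts("FSD driving"); }

static void planner_step(struct plan p)
{
    /* The "cover for me" design: uncertainty or imminent impact means
     * relinquishing control to the driver, i.e. disengaging. */
    if (p.confidence < 0.5 || p.collision_unavoidable)
        disengage();
    else
        execute(p);
}

int main(void)
{
    planner_step((struct plan){ .confidence = 0.9 });   /* normal driving */
    planner_step((struct plan){ .confidence = 0.2,      /* a fraction of a second before impact */
                                .collision_unavoidable = true });
    return 0;
}
```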
But can Tesla successfully develop FSD into a level 5 system? We don't really know whether Elon truly believes Tesla is capable of doing so: after all, he claimed years ago that it was already a solved problem. Clearly it wasn't then and isn't now, so it's impossible to say whether he genuinely believes it's possible or is just committing corporate puffery. Once again, we need to ignore what the CEO says on stage and on his $44 billion blog and look at what the company itself says in official filings: last month, Tesla finally applied for a permit to operate a ride-hailing service in California, but with human drivers! It seems like not even Tesla believes in its self-driving system.
Why would Tesla apply for this permit when its ADAS is still only at level 2? It makes no sense! Well, except that this is entirely in line with how startups that fail to deliver act: too much time has passed, investors are getting anxious, the stock is in trouble, so they just release what they have, no matter how broken or unusable. They need to release just to be able to say they did, that they tried, even if it means truly and once and for all exposing their failure. Eventually, theories and promises MUST meet reality, no matter how disastrously. Elon has been talking and talking and tweeting and xcrementing about this taxi service for so long that Tesla simply can't keep the charade going on nothing but beliefs and good intentions any longer. They already failed with the Vegas Loop—which was supposed to be a high-speed autonomous taxi service but turned out to be a very low-speed, human-driven taxi service—years ago. Starting this service will, at the very least, buy them some time and plausible deniability.
Still, can they turn it into a level 5 system sometime? I believe they can't. Not without deleting everything and starting from scratch.
Tesla's FSD problem has two main factors. The first is hardware. Musk recently admitted that Tesla's previous hardware platforms were not capable enough, but claims that the most recent one (HW4) is (a claim he had made for those previous versions too). In a few years, Tesla will probably be forced to admit that even HW4 isn't capable enough to provide full self-driving. Fixing this would cost insane amounts of money, especially if Tesla is forced to upgrade older cars (after all, those owners paid for a feature Tesla failed to deliver), but I believe most such owners are sure to forgive their Lord and Savior and just buy a new Tesla.
The second factor is software, and here Tesla is in even more dire straits, at least in my opinion. I've learned from decades of writing software, reading software, and working with many software companies that software is often built on top of certain core assumptions. These assumptions are often about what is and isn't possible. For example, I worked with a company whose security product needed to "hijack" a certain UNIX socket whose path was hardcoded in the system it sought to secure. They decided to rename the socket file and recreate it themselves, but they assumed this was not possible while the original service was up and running, i.e. without stopping and starting it again. Because of this core assumption, many lines of code were written, lengthy documentation written, comprehensive tests made, support staff trained, customer issues collected, bugs fixed and communicated, and more. Immeasurable time was spent because of this assumption, which was in fact false. They could rename the file at will and the underlying system would have kept working: once the file is opened, its name is meaningless to the process, because the listening socket remains bound to the same inode, and it's the inode that matters.
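Here's a small sketch of that discovery (reconstructed from memory with made-up paths and names, not the company's actual code); on Linux, the original service stays reachable after the rename precisely because the socket is tied to the inode:

```c
/* Demonstration: renaming a UNIX domain socket file does not break the
 * service listening on it. Paths and names below are invented. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

#define ORIG_PATH  "/tmp/service.sock"        /* hypothetical hardcoded path */
#define MOVED_PATH "/tmp/service.sock.moved"  /* hypothetical new name */

static int make_listener(const char *path)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    unlink(path);
    if (fd < 0 || bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 || listen(fd, 1) < 0) {
        perror("listener");
        exit(1);
    }
    return fd;
}

static int connect_to(const char *path)
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    return connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0 ? fd : -1;
}

int main(void)
{
    int srv = make_listener(ORIG_PATH);   /* the "original service", up and running */

    /* The step they assumed was impossible while the service is running: */
    if (rename(ORIG_PATH, MOVED_PATH) < 0) {
        perror("rename");
        return 1;
    }

    /* The service is still reachable through the renamed file, because the
     * listening socket is bound to the inode, not to the path string. */
    int cli = connect_to(MOVED_PATH);
    printf("connect via renamed path: %s\n", cli >= 0 ? "works" : "fails");
    if (cli >= 0)
        printf("original service accepted it: %s\n",
               accept(srv, NULL, NULL) >= 0 ? "yes" : "no");

    /* (The security product would now create its own socket at ORIG_PATH
     * and proxy traffic to MOVED_PATH; omitted here.) */
    unlink(MOVED_PATH);
    return 0;
}
```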
Thankfully, this assumption was not particularly difficult to root out of the system once it was discovered to be false, as it was only relevant during installation, but that is not always the case. Sometimes the assumption is so core to the design of the system that virtually every function or flow in the system critically depends on it being true. You see it all the time in aging software systems. As systems evolve, time passes, and demands change, companies and projects often find themselves opting for a complete rewrite instead of trying to patch things up. This is not because the technologies used are outdated or the hardware not powerful enough; it's because core assumptions of the system are practically impossible to root out without upending the entire project anyway. Many projects, especially in the startup world, begin development with practically zero demands for security and defensive behavior (I often rant about the lack of basic input validation in many startup software systems on this blog). When the time to put their big boy boots on and start taking themselves seriously finally comes, companies may find that the core assumption that "everything will be fine and dandy" is so ingrained into the system that fixing it could take years and may require sweeping rewrites.
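A toy illustration of how such an assumption spreads (entirely invented, not any particular company's code): once "the input is always well-formed" is baked in, every function downstream silently depends on it, and "just add validation later" means touching all of them.

```c
/* Toy example of a core assumption permeating a codebase: every function
 * below silently assumes the request is non-NULL and well-formed. */
#include <stdio.h>
#include <string.h>

struct request { char user[32]; char action[32]; };

static void log_request(const struct request *r) { printf("%s: %s\n", r->user, r->action); }
static void bill_user(const struct request *r)   { printf("billing %s\n", r->user); }
static void run_action(const struct request *r)  { printf("running %s\n", r->action); }

static void handle(const struct request *r)
{
    /* No validation anywhere: the "everything will be fine and dandy" assumption. */
    log_request(r);
    bill_user(r);
    run_action(r);
}

int main(void)
{
    struct request ok;
    strcpy(ok.user, "alice");
    strcpy(ok.action, "export");
    handle(&ok);
    /* handle(NULL), or an oversized user name, would crash or corrupt memory,
     * and fixing that means revisiting every function sharing the assumption. */
    return 0;
}
```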
I believe Tesla has such a critical core assumption, and it's the one I've been talking about throughout this entire text: the system is probably designed and implemented under the core assumption that the driver will always be there to take control and correct the system when it is wrong. I believe it was entirely impossible for Tesla's developers to ignore this assumption, and that every line of FSD code depends on it being true.
In presentations, Elon likes to say that Tesla FSD has achieved some large number of total miles driven, and Tesla fans often say Tesla has an immense lead over its competitors in the self-driving market because it has accumulated so many self-driving miles, despite other companies already operating self-driving taxi services. But FSD miles aren't self-driving miles. In fact, Tesla has achieved exactly ZERO autonomous miles, because a human driver was always there, in charge. FSD's core assumption that the human will correct it when it's wrong can be viewed another way: FSD is probably programmed to keep going unless the human driver tells it otherwise. Of course their ADAS collides with parked emergency vehicles; why wouldn't it? By not telling it not to, the driver has practically given the software their blessing to do so.
```c
while (!driver->disengage)
    car->keep_moving();
```
Elon once said that all user input is error, but his own software doesn't seem to follow this principle. User input is absolutely vital to it. It even treats the lack of proactive input as input.
Eventually, I believe this is one of those assumptions that is so deeply ingrained that rooting it out without starting from scratch will not be practical, and starting from scratch will be disastrous for the project. Tesla FSD is doomed, and I'm not sure they'll be able to recover once it becomes public knowledge. And yes, I am aware of the irony that my analysis is based on an assumption too, or at least a presumption.
To whom it may concern (e.g. Elon's hardcore litigation team): this is an opinion piece. These are all guesses. "Individual puffery", if you will. The text can say the wrong thing at the worst time. Please don't sue me; Forbes has yet to declare me the richest man alive, and they're unlikely to do so any time soon. Maybe by the end of next year.