AV and Predictability 🔗
The thing about driving is that it's all about predictability. The entire point of traffic laws is to make traffic predictable. Humans are difficult to predict, except for certain behaviors within broad enough bounds, so the laws are there to eliminate as much unpredictability as possible.
Humans may not be easily predictable, but they are flexible and adaptable, which is very important when driving, because at the end of the day, traffic can be quite unpredictable at small scales. Even flexibility, though, needs to be predictable: given a certain unexpected situation, we should be flexible enough to react, but our reaction needs to be predictable. Take all those idiots who like to brake-check other drivers: they introduce an unexpected hazard into the other driver's drive, and predict that the driver will slam on the brakes rather than ram into them (and I do admire those that do anyway).
The thing about unpredictability is that it cascades, and cascading failures are often catastrophic failures. I mentioned in my earlier rant that AVs depend on many factors, such as local hardware and software, remote servers, and GPS satellites. Let's look at how unpredictability cascades: an AV's computer needs to make a certain request to a remote server in order to function. So the vehicle's program, written in a certain programming language, makes a call to the operating system (written in a certain programming language) using its API, which in turn makes a call to the network adapter using its API, which accepts the request and sends it to the remote server, where it's accepted by its network adapter, parsed by its operating system, handled by its computer program, which makes a request to a database program, which makes a request to the OS to access the filesystem using its API, parses the result, and prepares a response that travels the entire way back to the vehicle's computer program. And this is an oversimplification, mind you. When the server's filesystem reacts to the database in an unpredictable way, the database program will sometimes react in an unpredictable way too, which in turn causes the server's computer program to react in an unpredictable way, and so on it cascades until your car's computer program loses its shit and kills you.
Most statically/strongly-typed languages are out too, probably. Go, for example, is a case study of cascading failures. It happily lets you ignore errors, a practice which routinely results in cascading failures that are extremely difficult to diagnose, and neither the standard library documentation nor the top open-source Go projects teach you how to properly handle errors, which remain one of the language's most divisive aspects. By the way, not even the documentation of Go's standard library can say what you should do if a database cursor fails to close. C? I can practically feel myself crashing already. Furthermore, almost all of the commonly used programming languages have "side-effects". Running a function with the exact same input over and over again will not always produce the same output, because there's often a source of unpredictability, which can be as basic as the current date.
Of course, you can say that writing safe and predictable software in these languages is possible, even if hard. And you'd be right. You can adopt very strict development methodologies and processes; you can hire better programmers; you can utilize static code analysis to catch the most egregious kinds of mistakes; you can do fuzz testing; and more. We all know, however, that this is not what's happening at all those tech companies these days, at least not well, and developers are an unpredictable bunch.
Unpredictability (and complexity) is the main reason why I distrust autonomous vehicle technology, and anything that utilizes Artificial Intelligence and Machine Learning. The latter two definitely have their use cases, but I fear they are being shoved into places where they shouldn't be. I will probably never trust a car whose behavior depends on an AI system, because such a system—in my eyes—is entirely unpredictable. I have no idea what it's going to do. This is why I don't buy Tesla's spiel about Autopilot and FSD decreasing the driver's cognitive load. On the contrary, I think they increase it, because the driver—who serves as a fail-safe for the AI-based system—needs to be able to predict incorrect AI behavior, or otherwise react to it and correct it virtually immediately. That's just not possible.
There are other factors to take into account for our choices, of course, but they all relate, one way or another, to safety. Predictability is required because it increases safety. Good performance is required because it allows the vehicle to react quickly to unexpected situations, thus increasing safety.
So I don't really know which programming language to use, but I do know that I would make reducing sources of unpredictability my top priority, or one of them. Not only to make my autonomous vehicle safer and more predictable, but also to make it easier to diagnose when a failure does occur. Are you really able to tell with 100% accuracy what the cause of a failure was in a system that's based on machine learning and AI?
Maybe I should just read NASA's Software Design Principles; if there's anyone who knows unpredictability, it's them.