
The Future of Autonomous Driving

The future of self-driving cars is deterministic rather than stochastic


By Mrinmoy Das, ex-Head of Risk – Global Emerging Markets and Foreign Exchange, NOMURA

 

Having spent the last 20 years in the financial industry as a risk manager, I see eerie parallels between what happened with stochastic modelling in finance 30 years ago and what is happening with stochastic modelling in the self-driving car industry now. There are important lessons from the financial industry that the automotive industry and its regulators need to incorporate into their worldview so that the same mistakes are not repeated.

 

LTCM (From Wikipedia)

 

Long-Term Capital Management L.P. (LTCM) was a hedge fund management firm based in Greenwich, Connecticut that used absolute-return trading strategies combined with high financial leverage. The firm's master hedge fund, Long-Term Capital Portfolio L.P., collapsed in the late 1990s, leading to an agreement on September 23, 1998, among 14 financial institutions for a $3.6 billion recapitalization (bailout) under the supervision of the Federal Reserve. The fund was liquidated and dissolved in early 2000. Philippe Jorion identifies several inputs and assumptions used by LTCM that contributed to a significant underestimation of risk.


In the self-driving industry, we again have well-renowned and well-funded players using stochastic modelling to model human behaviour. However, these efforts have yet to answer a few fundamental questions about the choice of model – more on this later.

 

 

What is the problem the industry is trying to solve?

 

The basic problem is that of point-to-point mobility in the context of existing infrastructure. The way our society is organised, we spend resources (time, money and energy) on arriving at our destination so that we can unlock some utility. Utility can be economic (e.g., going to work) or social (e.g., visiting family, taking care of a loved one). Mobility is essential to the functioning of our society.

 

Resources (time, money and energy) spent on mobility are generally viewed by society as “dead” investments that do not result in any benefit other than delivering us to the point where “valuable” utility can be unlocked. Over the course of its evolution, society has tried to minimize this “dead” investment by inventing faster modes of transportation. This has largely been successful to date through increases in linear speed. However, further increases in linear speed have failed to minimize the “dead” investment because they have come up against a natural barrier.

 

The natural barrier is gridlock. In addition to increasing linear speed, society has naturally moved closer together so that physical distances, and hence the “dead” investment, are minimized. This is how modern urban civilization grew out of rural life. As society has moved physically closer together, the space remaining for movement has decreased. Hence, the problem of “dead” investment has moved from one of linear speed to one of optimization.

 

Optimization has been attempted by urban planners in various shapes and forms. Intra-city road infrastructure, rail infrastructure, hub-and-spoke models, etc., have been tried with varying degrees of success. Societies that have done it well have prospered (e.g., Singapore). These avenues have largely been explored, and society is reaching the limits of such infrastructure development simply because there is no more space left. This leads us to the next frontier in “dead” investment minimization.


The next frontier is making the “dead” investment “valuable”. For time spent in transit to be valuable, it must be available to the user without distractions and in a continuous block. If the user is engaged in operating the mode of transport (driving), it cannot be used productively. Human beings are not natural multi-taskers. Our brains take time to switch in and switch out, and it takes effort. Imagine listening to the first two minutes of a song, stopping it, switching modes of transport, and then listening to the remainder. Mode-switching is friction that is a cost to the user. This is the genesis of the requirement for point-to-point travel in self-driving cars, where machines (cars) take over the drudgery of transport and free up humans to maximize their utility.

 

Is self-driving a goal worth having? Absolutely. A significant part of global income is spent on mobility. A premium is placed on modes of transport that are point-to-point and require less intervention (e.g., taxis vs. buses). A car in Singapore costs more than $100k. If such transport could be made cheap, it would work as a powerful economic stimulus, with the income saved being redirected to consumption. Further efficiency gains can be had by using commute time productively.

 

To summarise, the requirements for future human mobility are 1) it must be automatic – not requiring human attention – and 2) point-to-point – not requiring mode switching. This definition is important, as it will drive the choices that need to be made. “Self-driving” cars that require intermittent human intervention, i.e. a safety driver, do not satisfy the first condition and hence cannot deliver the benefits of automatic mobility.


Machine Learning

 

The current model of choice for solving the self-driving problem is machine learning. Machine learning is a statistical method for drawing inferences from available data by calibrating the parameters of a model against a sample dataset. These parameters are then used to make predictions on out-of-sample data. Traditional statistical analysis had to rely on analytical tools to calibrate model parameters.
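
To make the calibrate-then-predict loop concrete, here is a minimal sketch in Python. It uses scikit-learn on a synthetic dataset – both are illustrative assumptions, not a reference to any particular self-driving stack: the model's parameters are fitted to a sample and then used on data the model has never seen.

    # Minimal sketch: calibrate parameters on a sample, predict out-of-sample.
    # Synthetic data and logistic regression are illustrative choices only.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)            # calibration ("training") on the sample
    print(model.score(X_test, y_test))     # accuracy on out-of-sample data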


Machine learning’s claim to fame has been to automate this calibration process. However, that claim is only partially true – there are parameters in machine learning that still need human intervention – it is called Hyperparameter Tuning in industry parlance. This is more of an art than a science. Machine learning done at scale is a huge progress over traditional statistical methods. However, fundamentally it is still a statistical method – it has the same advantages and drawbacks.
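
As a hedged illustration of that human-in-the-loop step, the sketch below (again scikit-learn, with an assumed logistic-regression model and an arbitrary grid of candidate values) shows a hyperparameter – the regularisation strength C – being chosen by grid search rather than learned by the algorithm itself.

    # Minimal sketch of hyperparameter tuning: C is not calibrated by the
    # learning algorithm; a human supplies the candidate values to search over.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import GridSearchCV

    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    grid = {"C": [0.01, 0.1, 1.0, 10.0]}    # human-chosen candidates (illustrative)
    search = GridSearchCV(LogisticRegression(max_iter=1000), grid, cv=5)
    search.fit(X, y)
    print(search.best_params_)              # the partly automated "art" the text refers to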


The advantage of machine learning implemented on modern compute power is its ability to process large datasets in reasonable time frames. Training can be conducted economically on large server farms, and predictions about the environment can be made in real time by in-vehicle computers.


One fundamental feature of statistical models is the types of errors they make in prediction. Traditionally these have been classified as Type 1 and Type 2 errors. Type 1 errors are false positives, whereas Type 2 errors are false negatives. One of the problems of building machine learning algorithms is balancing the two types of errors through hyperparameter tuning. This is a choice that the algorithm builder must make, and depending upon that choice, the model makes more or fewer of each type of error.
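
A minimal sketch of that trade-off, under the same illustrative assumptions as the earlier snippets: moving the decision threshold of a classifier shifts the balance between false positives (Type 1) and false negatives (Type 2).

    # Minimal sketch: the same model produces different mixes of Type 1 and
    # Type 2 errors depending on where the decision threshold is set.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
    probs = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]

    for threshold in (0.3, 0.5, 0.7):
        pred = (probs >= threshold).astype(int)
        type1 = int(np.sum((pred == 1) & (y == 0)))   # false positives
        type2 = int(np.sum((pred == 0) & (y == 1)))   # false negatives
        print(f"threshold={threshold}: Type 1 = {type1}, Type 2 = {type2}")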


Calibration assumptions play a vital role in understanding statistical models. If you understand the assumptions used in calibration, you can have a reasonable understanding of the model output under different conditions. Traditional statistical analytical tools have well-understood assumptions, and hence their output can be understood in the context of those assumptions. Machine learning has automated the calibration process, and hence the assumptions used in building the model are opaque.


So why can stochastic modelling survive in finance but not in the automotive industry?

 

The answer lies in two concepts: 1) the cost of making a mistake and 2) stress testing.


The cost of making a mistake in the finance industry is a financial loss. These losses can be very large – so large that they have the potential to bring down large swathes of the global economy. LTCM and the Lehman Brothers collapse are two examples of such crises. In the automotive industry, however, the cost of making a mistake is the loss of, or grievous harm to, human beings. Society views these two types of consequences quite asymmetrically.


After experiencing the near collapse of the world economy through model failure, regulators in the finance industry now understand the consequences of model failure and are addressing the risk in two ways. They have mandated that financial firms set aside capital to absorb losses – thus creating a shock absorber for the economy in general. They have also created an industry of model validation experts who audit models. Every model used in a financial firm is inspected and audited. There are model creators (front office), internal validators (risk management units), external validators (external audit firms) and regulatory auditors. This daisy chain of model validation is intended to quantify the magnitude of model failure so that firms can be forced to hold capital to absorb such failure. One could argue that such checks and balances existed before the Lehman crisis but still did not prevent it. This only highlights the difficulty of truly comprehending model assumptions: what may appear a reasonable assumption in one state of the world may appear egregiously adventurous in another.


We should remember that such model failures happened with relatively simple, easy-to-understand Gaussian models. Machine learning algorithms are exponentially more complex – validating and understanding the assumptions of these models is an impossible task. And if such comprehension is not possible, it is not possible to fix the model after a failure. Society works on a feedback loop: if a system (product or model) fails, we investigate to arrive at a theory of the underlying cause and then fix that cause to improve the system. If a system failure cannot be analysed, especially one with a high cost of failure, the system must go back into the world unchanged and the failure will happen again. This will not be tolerated by society. Boeing can still put planes in the sky because each incident is analysed and fixed. The same cannot be said of self-driving cars based on machine learning. The lesson from the financial industry is that more complex is not necessarily better.


Stress testing is another way of simulating model failure. Financial systems can be stress tested because the underlying independent variable(s) exist on a continuum. For example, if a three-standard-deviation move in USDJPY is 3%, then we could say that a 6% move is “stressful”. No such continuum exists in the automotive world. Observations of past accidents do not extrapolate to the circumstances of the next, potentially more devastating, accident.
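
The arithmetic behind that financial example is trivial precisely because the variable lives on a continuum – a sketch, with a hypothetical position size, is shown below.

    # Minimal sketch of a financial stress test: "stress" is just a larger move
    # along the same continuum. Position size is hypothetical; the 3% figure is
    # the three-standard-deviation move from the text.
    position_usd = 10_000_000            # hypothetical USDJPY exposure
    three_sigma_move = 0.03              # "normal" extreme move
    stress_move = 2 * three_sigma_move   # the 6% move deemed "stressful"

    print(f"3-sigma loss:  ${position_usd * three_sigma_move:,.0f}")
    print(f"stressed loss: ${position_usd * stress_move:,.0f}")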


So, why is the machine learning paradigm so attractive to vehicle manufacturers? It has the advantage of what I call “Inferential Leverage”: a large number of inferences can be drawn from a data sample very efficiently and economically. However, such leverage comes at a cost – tail events fall outside its remit. Just like financial leverage, it will perform well for some time, followed by catastrophic failure.


In summary, stochastic modelling can exist in the financial world because the consequences of model failure are limited to financial loss, the risk can be mitigated by building capital buffers, stress testing can be used to simulate model failure, and models can be qualitatively audited for suitability. In the case of self-driving cars based on machine learning, the consequence of model failure is the loss of, or grievous harm to, human life; the risk cannot be mitigated; stress testing is not possible; and these models cannot be audited. Regulators will need a satisfactory resolution of these issues before they allow self-driving cars on the road.

 

Until then, just like LTCM, impressive results will be demonstrated in the short term or in sheltered environments, followed by massive failure in the real world.


So, what is the future of human mobility?

 

There are two choices for human mobility – use complex statistical models or simple deterministic ones.


The future of human mobility will be based on deterministic models – models with no “long tails” or “edge cases”. Decisions will be bounded by well-defined rules. It will be an automation of how we travel on roads today. Roads have evolved a basic set of rules that, if enforced rigorously, lead to safe travel.


Deterministic algorithms have the added attraction of being modularizable – they can be broken down into simpler elements that can be solved independently of other elements. In stochastic modelling, modularization is difficult: an error in the model specification requires the entire model to be recalibrated, which may in turn require previously validated features to be retested because the model parameters have changed.
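
As a hedged sketch of what such modularity might look like (the rules, thresholds and vehicle-state fields below are purely illustrative assumptions, not a description of any production system), each rule is a small deterministic function and the controller applies them in a fixed priority order:

    # Minimal sketch of a modular, deterministic rule set: each rule is a small,
    # independently testable function, evaluated in a fixed, well-defined order.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class State:
        speed_kph: float
        speed_limit_kph: float
        obstacle_distance_m: float
        stopping_distance_m: float

    def obstacle_rule(s: State) -> Optional[str]:
        # Brake if an obstacle is within stopping distance.
        return "BRAKE" if s.obstacle_distance_m <= s.stopping_distance_m else None

    def speed_limit_rule(s: State) -> Optional[str]:
        # Slow down if travelling above the posted limit.
        return "SLOW" if s.speed_kph > s.speed_limit_kph else None

    def decide(s: State) -> str:
        # Fixed priority order: the same state always yields the same decision.
        for rule in (obstacle_rule, speed_limit_rule):
            action = rule(s)
            if action is not None:
                return action
        return "MAINTAIN"

    print(decide(State(speed_kph=50, speed_limit_kph=60,
                       obstacle_distance_m=10, stopping_distance_m=25)))  # -> BRAKE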


Such deterministic models can be audited and built in small incremental steps. Each element can be tested in isolation. Errors in model specification can be easily identified and fixed. This will provide assurance to regulators that model failure can be corrected.
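
Building on the hypothetical obstacle_rule and State from the sketch above, testing one element in isolation might look like this:

    # Minimal sketch of testing one rule in isolation; it reuses the hypothetical
    # State and obstacle_rule definitions from the previous sketch.
    import unittest

    class TestObstacleRule(unittest.TestCase):
        def test_brakes_inside_stopping_distance(self):
            s = State(speed_kph=40, speed_limit_kph=60,
                      obstacle_distance_m=10, stopping_distance_m=25)
            self.assertEqual(obstacle_rule(s), "BRAKE")

        def test_no_action_outside_stopping_distance(self):
            s = State(speed_kph=40, speed_limit_kph=60,
                      obstacle_distance_m=50, stopping_distance_m=25)
            self.assertIsNone(obstacle_rule(s))

    if __name__ == "__main__":
        unittest.main()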


Only deterministic models have the possibility of being adopted by society at large. As a thought experiment, imagine that you have bought a self-driving car that has been trained using machine learning. The stochastic model has been trained to such a degree that it only requires you to intervene manually once a year (i.e. a very long tail). However, if you fail to intervene, the car’s behaviour will be unpredictable, and it might cause an accident and kill you. As a user, what would you do? Is your attention fully away from the road and driving, or are you waiting for the once-a-year call to take control of your car? A normal person would choose the latter, because preservation of life trumps everything else. This defeats the basic premise of the self-driving promise – that you can take your attention away from the road and use it productively.


In summary, deterministic models have advantages over stochastic models: 1) they are auditable, 2) their assumptions can be articulated and validated, 3) specific conditions can be introduced or edited, 4) they can be modularized and built in parts, and 5) most importantly, they produce repeatable results. All of this will provide comfort to society that a self-driving infrastructure can be trusted.
