How to make unmanned cars much more intelligent?

If you are reading this article, you are probably wondering why intelligence should be associated with unmanned cars at all. After all, it's only code written by programmers at Google, Tesla, Uber or smaller start-ups. And even if it were possible to make cars more intelligent, why bother? Aren't they already operating quite well with embedded personal computers (PCs)? You might assume they simply use PCs with beefed-up multicore processors for extra raw processing power. But all of this would miss the point. Current computer technology cannot provide the significant leap in processing performance that would enable cars to learn by themselves and adopt intelligent behaviors. To achieve this, two fundamental problems must be overcome: silicon limitations and parallel processing requirements.

Silicon chip limitations

Most of us are familiar with Moore’s law, which has accurately predicted for the last half century that the number of transistors on a chip, and with it overall performance, doubles approximately every two years. The issue today is that transistors are becoming so small that they suffer from energy leakage, which generates heat and, by the same token, lowers that performance. Many specialists estimate that after the next 5 nm transistor generation, chip manufacturers will need to fundamentally change materials or technology to keep Moore’s law going.
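
To give a feel for what this doubling implies, here is a back-of-the-envelope sketch in Python. The starting point (the Intel 4004 and its roughly 2,300 transistors in 1971) is a common reference chosen purely for illustration, not a figure taken from this article.

```python
# Rough illustration of Moore's law: start from the Intel 4004
# (about 2,300 transistors in 1971) and double every two years.
transistors = 2300
for year in range(1971, 2017, 2):
    transistors *= 2
print(f"~{transistors:,} transistors predicted by 2017")
# Prints roughly 19 billion, which is of the same order as the
# largest GPU dies shipping around that time.
```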

The Artificial Neural Network (ANN) revolution

Most of us have heard about “deep learning” and how the Googles and Facebooks of this world, with their Artificial Intelligence (AI) systems, can identify pictures of things or people and even beat the world’s best Go player. Behind these feats are various algorithms and techniques, such as forward and backward propagation. Some of the newer ones enable computers to learn by themselves, without being fed thousands of examples of what must be inferred. The main issue with these software technologies is that they rely on ANNs and require massive parallel processing. Consequently, large numbers of PCs and servers running in parallel must be used. For instance, the distributed version of AlphaGo, the system that beat the world champion Go player Lee Sedol, used 1,920 CPUs and 280 GPUs (Graphics Processing Units). If we estimate the required power at 100 W per CPU and 200 W per GPU, this represents roughly 250 kW (to be compared with Mr. Sedol’s 20 W)! In short, to run the same kind of sophisticated self-learning software on ANNs, an unmanned car would currently need to carry racks of PCs and a diesel generator to power them.
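
A quick sanity check of that power figure, using only the per-unit assumptions stated above:

```python
# Distributed AlphaGo hardware as quoted above, with the assumed
# per-unit power draw of 100 W per CPU and 200 W per GPU.
cpus, gpus = 1920, 280
total_watts = cpus * 100 + gpus * 200
print(f"{total_watts / 1000:.0f} kW")            # ~248 kW
print(f"{total_watts / 20:,.0f}x a 20 W brain")  # over 12,000 times Mr. Sedol's budget
```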

If that were the case, why not use a different strategy and access AI applications residing in the cloud over a wireless link? This is what an American unmanned-vehicle start-up called “Local Motors” is doing. It uses IBM’s Watson (an updated version of the 2011 system that won the game show Jeopardy!) to run four APIs: Speech to Text, Natural Language Classifier, Entity Extraction and Text to Speech. However, a closer look shows that these APIs are used primarily to answer passenger questions rather than to actually operate the vehicle. The issue with wireless access to AI applications is that no safety homologation body will ever authorize the use of remote software for onboard safety-critical applications.
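
To make that division of labour concrete, here is a minimal sketch of the kind of four-step dialogue pipeline described above. All function names and canned responses are hypothetical placeholders, not the real Watson SDK; the point is simply that the chain answers passenger questions and never touches the driving controls.

```python
# Hypothetical stand-ins for the four cloud APIs; not the real Watson SDK.
def speech_to_text(audio: bytes) -> str:
    return "how long until we arrive"            # placeholder transcription

def classify_intent(text: str) -> str:
    return "eta_query" if "arrive" in text else "small_talk"

def extract_entities(text: str) -> dict:
    return {"destination": "the next stop"}      # placeholder entity

def text_to_speech(reply: str) -> bytes:
    return reply.encode("utf-8")                 # placeholder audio

def handle_passenger(audio: bytes) -> bytes:
    """Answers the passenger; the driving stack is never involved."""
    text = speech_to_text(audio)
    if classify_intent(text) == "eta_query":
        reply = f"We should reach {extract_entities(text)['destination']} shortly."
    else:
        reply = "Happy to chat while the onboard system keeps driving."
    return text_to_speech(reply)

print(handle_passenger(b"<audio frame>").decode("utf-8"))
```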

Onboard GPUs to run ANNs

A GPU is capable of a hundred times more calculations per second than a CPU and is one of the reasons for the emergence of these ANN technologies. The chip manufacturer NVIDIA is building chips designed specifically for deep learning processing (e.g. the Tesla P100). The heart of their computation is the streaming multiprocessor, which creates, manages, schedules and executes instructions from many threads in parallel. For self-driving vehicles, NVIDIA has introduced what it describes as its open AI car computing platform, NVIDIA DRIVE™ PX 2. It is a palm-sized, energy-efficient module (10 watts) that provides AutoCruise capabilities with deep learning, sensor fusion and surround technologies. The multi-chip configuration, with two mobile processors and two discrete GPUs, can deliver 24 trillion deep learning operations per second: quite serious processing capacity!
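
To put the 24 trillion operations per second in perspective, here is a rough estimate of how many forward passes of a network that budget could sustain. The layer sizes are hypothetical, chosen only to make the arithmetic concrete; real perception networks and real hardware utilization will differ.

```python
# Hypothetical fully connected layer widths, just for the arithmetic.
layers = [4096, 2048, 2048, 1000]
macs = sum(a * b for a, b in zip(layers, layers[1:]))   # multiply-accumulates
ops_per_inference = 2 * macs                            # one multiply + one add each
platform_ops_per_second = 24e12                         # DRIVE PX 2 figure quoted above

print(f"{ops_per_inference:,} ops per forward pass")
print(f"~{platform_ops_per_second / ops_per_inference:,.0f} forward passes/s (theoretical peak)")
```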

Onboard neuromorphic switches

ANNs fundamentally change the way computers work. Unlike classical computers, which basically work through sets of calculations and rules, ANNs can work with pictures and concepts and learn by example or by doing. To use an image, in classical computers the learning process is top down, while in ANNs it is bottom up. Moreover, ANNs try to mimic the human brain, especially in the way it memorizes. Just as the brain uses its synapses to memorize, ANNs use the modulated strength of the connections between their nodes to memorize the results fed forward or backward through the network.
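
A minimal sketch of this idea, assuming nothing beyond NumPy: a tiny network learns the XOR mapping purely by nudging its connection strengths during repeated forward and backward passes, so the trained weights themselves end up “memorizing” the answer.

```python
import numpy as np

# Toy network: 2 inputs -> 8 hidden sigmoid units -> 1 output, trained on XOR.
rng = np.random.default_rng(42)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8))   # input -> hidden connection strengths
W2 = rng.normal(0, 1, (8, 1))   # hidden -> output connection strengths

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # forward pass: signals flow through the weighted connections
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # backward pass: the error is propagated back and the weights are nudged
    err_out = (out - y) * out * (1 - out)
    err_h = (err_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ err_out
    W1 -= X.T @ err_h

# After training, the "memory" of XOR lives entirely in W1 and W2.
print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))  # typically close to [0, 1, 1, 0]
```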

Neuromorphic switches also drastically change how computers operate, as they emulate the mechanisms found in human neural synapses. Rather than giving a binary answer, 0 or 1, these switches should be pictured as biological-like neural networks. Like biological synapses, which carry different voltage values and can exert inhibitory or excitatory effects, they can take on multiple positive or negative values. For instance, IBM’s TrueNorth chip architecture allows each neuron to assign one of four possible synaptic strengths to its inputs. Such technology requires specialized software to control the chip’s physical neurons, which are grouped into cores of 256, each receiving information through 256 input connections. In fact, the 256 x 256 junctions (called synapses) form binary gates from which all connections fan out to the other neurons in a massively parallel fashion. This type of hardware still has limitations; it cannot, for instance, use unsupervised learning techniques. However, its consumption of only 70 milliwatts, and the higher intelligence it provides (roughly the digital equivalent of a small rodent brain), more than compensate for these limitations and, according to IBM, make TrueNorth a perfect candidate for driverless applications.
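
The sketch below is a toy spiking crossbar in the spirit of what is described above, not IBM’s actual TrueNorth programming model: 256 neurons, binary connection gates, one of four signed strengths per junction, and a simple integrate-and-fire rule. The numeric values (strengths, threshold, leak, firing rates) are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256  # neurons, mirroring the 256 x 256 crossbar described above

# Binary gates: each junction is either connected or not, and a connected
# junction carries one of four signed strengths (values are assumptions).
connected = rng.random((N, N)) < 0.1
strength_choice = rng.integers(0, 4, size=(N, N))
strengths = np.array([-1.0, 0.5, 1.0, 2.0])
weights = connected * strengths[strength_choice]

threshold, leak = 2.0, 0.9        # arbitrary toy parameters
potential = np.zeros(N)           # membrane potentials
spikes = rng.random(N) < 0.1      # a few neurons fire initially

for tick in range(10):
    # integrate: sum the strengths of the inputs that spiked, with passive decay
    potential = leak * potential + weights @ spikes.astype(float)
    # fire: neurons crossing the threshold spike and reset
    spikes = potential >= threshold
    potential[spikes] = 0.0
    print(f"tick {tick:2d}: {int(spikes.sum())} neurons fired")
```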

Cognitive programming

What I’ve been describing is a new world, neither digital nor completely analog, but a world based on discrete values! What is still missing, however, for the advent of real AI is the notion of time. IBM has joined forces with a company called Numenta to develop a memory-based algorithm called Hierarchical Temporal Memory (HTM) running on specialized hardware. Without digging too deep, what is important to understand is that information will be stored in a distributed fashion, organized hierarchically by abstraction level, with storage rules based on time. Whereas in the classical machines I’ve described the objective is to tweak the weights of existing synapses to get the ANN to learn, in these new types of intelligent machines learning will emanate from the formation of new synapses. In other words, they will create hardware with massive interconnectivity that can change its “plastic” network topology, just as human biological neural networks create intelligence through their neural plasticity. With these new solutions, IBM openly seeks to create the equivalent of the human neocortex, that is, the seat of human intelligence. Not only will it physically replicate the brain’s multi-layer structure, but it will also reproduce its neural processing properties.
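
As a toy illustration of “learning by growing new synapses” over time (and emphatically not Numenta’s actual HTM algorithm), the sketch below memorizes temporal transitions by adding connections it has never seen before, rather than re-weighting a fixed set.

```python
from collections import defaultdict

class ToySequenceMemory:
    """Toy illustration only: learns temporal transitions by *adding*
    new connections (context -> next symbol) rather than adjusting the
    weights of a fixed set of synapses."""

    def __init__(self):
        self.connections = defaultdict(set)  # "synapses" grown on demand

    def learn(self, sequence):
        for prev, nxt in zip(sequence, sequence[1:]):
            self.connections[prev].add(nxt)  # form a new synapse if unseen

    def predict(self, symbol):
        return sorted(self.connections.get(symbol, set()))

mem = ToySequenceMemory()
mem.learn(["red-light", "brake", "stop", "green-light", "accelerate"])
print(mem.predict("red-light"))    # ['brake']
print(mem.predict("green-light"))  # ['accelerate']
```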

The cognitive revolution

What I’ve described are some of the technologies that are drastically changing the computing world by introducing functionalities that we associate with higher intelligence. Supervised or unsupervised learning, the capacity for abstraction, the integration of time into data storage, and the faculty to work with images or concepts are a few of the “brain-like” properties that this continuum of new computing technologies will bring to each and every market sector. The automotive industry will be no exception: it will use these cognitive tools to better design cars, but it will also integrate these new technologies into its onboard driverless control systems. Some of these technologies are already available and others will be shortly. The point is that within five years, car builders will have at their disposal the right hardware, software and system architecture to design cognizant driving entities, with self-learning capacities and inferring faculties, which will over time induce more intelligent and much safer driving behaviors than the ones we humans can show today.
