AI as a Black Box – It seemed like the right thing to do…

There are a few hype waves going on at the moment – one of them is the Artificial Intelligence (AI) hype. The good thing is that some people are at least starting to ask interesting questions.

From the MIT Technology Review:

The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

That’s a bit misleading. The algorithm is still something that was provided by a programmer, and what it most likely does is correlate data from sensors (e.g. the front camera) with the actions of the driver.
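To make that concrete, here is a minimal sketch of what ‘taught itself to drive by watching a human’ amounts to: a supervised fit that correlates camera frames with the recorded steering angles of a human driver. Everything below is illustrative – synthetic data and a plain linear fit standing in for the real network.

```python
# Minimal sketch of "learning to drive by watching a human":
# plain supervised learning that correlates sensor input (fake camera
# frames) with the recorded driver action (a steering angle).
# All data below is synthetic.
import numpy as np

rng = np.random.default_rng(42)

# Pretend each "camera frame" is a tiny 8x8 grayscale image,
# flattened into a 64-dimensional vector.
n_frames = 500
frames = rng.normal(size=(n_frames, 64))

# The human driver's recorded steering angles for those frames,
# generated here from a hidden linear rule plus noise, purely so
# that the example has something to fit.
true_weights = rng.normal(size=64)
steering = frames @ true_weights + rng.normal(scale=0.1, size=n_frames)

# "Training" = a least-squares fit that correlates frames with actions.
weights, *_ = np.linalg.lstsq(frames, steering, rcond=None)

# "Driving" = applying the learned correlation to a new frame.
new_frame = rng.normal(size=64)
predicted_angle = new_frame @ weights
print(f"predicted steering angle: {predicted_angle:.3f}")
```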

The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.

Of course it matches, because it copies what the human driver did. In driving, most situations are standard and the actions of the driver are predictable, but just as human drivers make mistakes in such situations, so will AI drivers.
The big problem lies in non-standard situations, in which even human drivers may fail because the situation is outside their experience.

There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right. Starting in the summer of 2018, the European Union may require that companies be able to give users an explanation for decisions that automated systems reach. This might be impossible, even for systems that seem relatively simple on the surface, such as the apps and websites that use deep learning to serve ads or recommend songs. The computers that run those services have programmed themselves, and they have done it in ways we cannot understand. Even the engineers who build these apps cannot fully explain their behavior.

This is nonsense, unless you claim that engineers do not really know what they are doing and that the resulting system is no longer deterministic. You will always be able to figure out at what point the algorithm was and what data it used at that point.
Even with machine learning, computers do not ‘program themselves’ – at least not yet. They don’t rewrite their code – they reach decisions through statistical calculations based on a certain (possibly huge) amount of data.
This means that every AI running the same algorithm will reach the same conclusion given the same input and data. Thus it boils down to actually doing the logging right.
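A minimal sketch of that logging argument, assuming a deterministic model and using purely illustrative names: record the model parameters (or a hash of them) and the input alongside each decision, and the decision can be replayed and audited later.

```python
# Sketch of the "do the logging right" argument: if the model and its
# input are recorded, the decision can be reproduced later.
# Names, file name and structure are illustrative, not from any real system.
import hashlib
import json
import numpy as np

def decide(weights: np.ndarray, sensor_input: np.ndarray) -> float:
    """A deterministic 'decision': same weights + same input = same output."""
    return float(sensor_input @ weights)

def log_decision(weights, sensor_input, decision, logfile="decisions.log"):
    """Append everything needed to reproduce the decision later."""
    record = {
        "model_hash": hashlib.sha256(weights.tobytes()).hexdigest(),
        "input": sensor_input.tolist(),
        "decision": decision,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")

rng = np.random.default_rng(0)
weights = rng.normal(size=64)       # the trained model parameters
sensor_input = rng.normal(size=64)  # one sensor reading

d1 = decide(weights, sensor_input)
d2 = decide(weights, sensor_input)
assert d1 == d2                     # same algorithm, same data, same conclusion
log_decision(weights, sensor_input, d1)
```

With such a log, “why did it do that?” becomes a replay of the recorded input against the recorded model version rather than guesswork.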

This is Machine Learning (ML); it has nothing to do with ‘intelligence’, artificial or otherwise. It’s just a set of – admittedly complicated – instructions that correlate input with stored data.
This works well in deterministic scenarios. In scenarios based on (truly) random events it will not work any better than humans do. The machine will just have to take a guess.
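Reduced to its simplest possible form, ‘correlating input with stored data’ looks something like the following sketch: a nearest-neighbour lookup over synthetic data. Real systems are vastly more elaborate, but not different in kind.

```python
# The "correlate input with stored data" claim in its simplest form:
# a nearest-neighbour lookup. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(1)

stored_inputs = rng.normal(size=(100, 4))      # previously seen situations
stored_actions = rng.integers(0, 3, size=100)  # the action taken in each

def predict(new_input: np.ndarray) -> int:
    """Pick the action of the most similar stored situation."""
    distances = np.linalg.norm(stored_inputs - new_input, axis=1)
    return int(stored_actions[np.argmin(distances)])

print(predict(rng.normal(size=4)))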

In conclusion: if you cannot determine how a machine reached its decision, i.e. if you cannot diagnose it, it is not fit for operation.