AI as a Blackbox – It seemed like the right thing to do…

There are a few hype cycles running at the moment – one of them is the Artificial Intelligence (AI) hype. The good thing is that some people are at least starting to ask interesting questions.

From the MIT Technology Review:

The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.

That’s a bit misleading. The algorithm is still something that was provided by a programmer, and what it most likely does is correlate data from sensors (e.g. the front camera) with the actions of the driver.
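To make that concrete, here is a minimal toy sketch of what “learning to drive by watching a human” amounts to: supervised learning that correlates sensor input with recorded driver actions (often called behavioral cloning). All names, shapes, and the linear model are illustrative assumptions, not the actual system described in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy training data: each row stands in for a flattened "camera frame"
# feature vector; the label is the steering angle the human driver chose.
frames = rng.normal(size=(200, 8))
true_weights = rng.normal(size=8)
steering = frames @ true_weights  # pretend the driver's behavior is linear

# The learning algorithm itself is still "provided by a programmer":
# here, plain least-squares fitting of weights to the observed behavior.
weights, *_ = np.linalg.lstsq(frames, steering, rcond=None)

# The fitted model now produces driver-like actions for new frames...
new_frame = rng.normal(size=8)
predicted = new_frame @ weights

# ...but the weights only encode correlations from the data it has seen;
# they carry no explanation of *why* the driver acted that way.
```

In this toy setup the fit recovers the driver’s behavior exactly, precisely because every training situation came from the same simple pattern – which is also why such a model gives no guarantees in situations outside its training data.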

The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.

Of course it matches, because the system copies what the human driver did. In driving, most situations are standard and the actions of the driver are predictable – but just as human drivers make mistakes in such situations, so will AI drivers.
The big problem lies in non-standard situations, in which even human drivers may fail because the situation is outside their experience.



Jump – When AI drives your car (or spaceship)

Science fiction series are full of memes – especially on Artificial Intelligence. This one stems from Battlestar Galactica (the 2004 version) and shows what can happen when you wire your AI into a human-like body and let it drive your car (or spaceship).

Science fiction for sure, right? Yes, but it’s not as if current developments weren’t heading in that direction.