An example I wrote to test some features of Octave and my workstation’s performance. I used GNU Octave to generate the individual images; creating all of them took around 2 min on my box.
The code is quite short and is saved as ‘simple.m’:
x = -8:.2:8;
y = -8:.2:8;
[X,Y] = meshgrid(x,y);
R = sqrt(X.^2 + Y.^2) + eps;   % eps avoids division by zero at the origin
i = 1;
for k = -1:.05:1
  Z = sin(k*2*pi)*sin(R)./R;
  fnprefix = "/home/zatoichi/Pictures/images/image";
  fnindex = num2str(i);
  fnext = ".jpg";
  filename = [fnprefix fnindex fnext];
  surf(X,Y,Z);
  set (gca(),"Zlim",[-2.0, 2.0]);  % keep the z-axis fixed across all frames
  print(filename);                 % write the current figure to disk
  i = i+1;
endfor
This code generates 41 individual images like these (showing the first 4 only):
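Since the 41 frames sweep k from −1 to 1, they lend themselves to an animation. A minimal sketch for stitching them into a GIF from within Octave, assuming ImageMagick’s convert tool is installed and the output path from the script above (the file list is built explicitly so the frames stay in numeric rather than alphabetic order):

```octave
% Collect the frame filenames in numeric order.
frames = "";
for i = 1:41
  frames = [frames " /home/zatoichi/Pictures/images/image" num2str(i) ".jpg"];
endfor
% Hand them to ImageMagick; -delay 10 gives roughly 10 frames per second.
system(["convert -delay 10" frames " animation.gif"]);
```

This is an optional extra step, not part of the original script.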
The US Navy destroyer «USS Fitzgerald» collided on Saturday morning at 02:30 local time (17:30 GMT) with the civilian, Philippine-flagged container ship «ACX Crystal», which is operated by the Japanese shipping company NYK Line, according to the Philippine newspaper «Manila Times».
The «ACX Crystal», 222.6 m in length, was only slightly damaged in the accident and continues on its way to Tokyo.
The «USS Fitzgerald», on the other hand, at 155.9 m not a small ship either, was heavily damaged and had to be brought to Yokosuka harbour.
There are a few hypes going on currently – one of them is the Artificial Intelligence (AI) hype. The good thing is, some people are at least starting to ask interesting questions.
From MIT Technology Review:
The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.
That’s a bit misleading. The algorithm is still something that was provided by a programmer, and what it most likely does is correlate data from sensors (e.g. the front camera) with the actions of the driver.
The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? As things stand now, it might be difficult to find out why. The system is so complicated that even the engineers who designed it may struggle to isolate the reason for any single action. And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.
Of course it matches, because it copies what the human driver did. In driving, most situations are standard and the actions of the driver are predictable; but just as human drivers make mistakes in such situations, so will AI drivers.
The big problem lies in non-standard situations, in which even human drivers may fail, because the situation is outside their experience.
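The “taught itself by watching” idea can be sketched in a deliberately toy form: fitting recorded driver actions to sensor readings. Here ordinary least squares stands in for the real (far more complex) learning system, and both the “sensor” features and the “driver” behaviour are made-up synthetic data:

```octave
% Illustrative only: synthetic "sensor" features and "driver" actions.
n = 200;
X = [ones(n,1) randn(n,2)];       % bias term plus two mock sensor channels
w_true = [0.1; 0.8; -0.5];        % the driver behaviour to be imitated
y = X*w_true + 0.01*randn(n,1);   % recorded driver actions, with noise
w = X \ y;                        % least-squares fit: the "learned" policy
```

Within the data it has seen, the fitted w reproduces the driver closely; the point above is that this says nothing about situations outside that experience.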
The Iranian Ministry of Security presented video footage taken by surveillance cameras from inside the auditorium of the Islamic Shura Council on Wednesday 7/6/2017 – from Al Alam Website.
Uploaded for viewing convenience here:
Science Fiction series are full of memes – especially on Artificial Intelligence. This is one of them and stems from Battlestar Galactica (2004 version), showing what can happen when you wire your AI to a human-like body and let this one drive your car (or spaceship).
Science fiction for sure, right? Yes, but it’s not as if current developments weren’t going in that direction.