Everyone's being paranoid about AI these days. And the less familiar people are with the subject, the more concerned they are. Even some of my tech friends - believe it or not, even software engineers, the ones who have only a very vague idea of how AI/ML actually works - are expressing their concerns.
"The singularity is just around the corner!"
"AI is gonna take all our jobs!"
"Beware the almighty Machina!"
Here are my 2 cents on this. Actually, my 3 cents to be precise.
Support vector machines in multi-dimensional spaces, function approximation, regression etc. - all these things have been known and used for a very long time (least-squares regression goes back to the early 1800s). Even the great and terrible Her Majesty the Neural Network has been around since 1943.
The majority of ML stuff is built on very basic math. Not even the advanced stuff like, I don't know, differential equations, or tensors, or "string theory", or some other complicated shit... It's built on a thing called "linear algebra", which is taught in the first year of every engineering college in the world.
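Don't believe me? Here's a tiny sketch (toy data, made-up numbers) of "training" a model with nothing but first-year linear algebra - fitting a line is literally one matrix equation:

```python
# A minimal sketch: "learning" a linear model via the normal equation,
# w = (X^T X)^(-1) X^T y. The dataset is a hypothetical toy example.
import numpy as np

# Toy data: y is roughly 3x + 1 plus a bit of noise
rng = np.random.default_rng(42)
x = rng.uniform(0, 10, size=100)
y = 3 * x + 1 + rng.normal(0, 0.5, size=100)

# Design matrix with a bias column: X = [x, 1]
X = np.column_stack([x, np.ones_like(x)])

# One matrix equation, straight out of Linear Algebra 101
w = np.linalg.solve(X.T @ X, X.T @ y)

print(w)  # ~[3.0, 1.0] - slope and intercept recovered
```

That's it. No magic, no sentience. Just matrices.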
Remember the 80s and 90s? The "386" CPU came out and it was roughly 2x faster than the "286". But wait, in just a few years we got the "486" and it was even faster! Hold on, a couple more years pass and - holy shit - we have a Pentium now. Then a Pentium Pro/Pentium II. Followed by Pentium III... And don't forget, at some point - I don't even remember exactly when - we also jumped from a 32-bit to a 64-bit architecture... And every time it was a huge leap forward.
Now we have reached the point where increasing performance by shrinking transistors is getting really tricky. Simply because it's hard to build a stable production process that churns out working chips in massive quantities. With tens of millions of microscopic transistors shoved into a single square millimeter, it is statistically impossible to keep the defect rate at zero.
That is the reason per-thread performance is plateauing.
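If "statistically impossible" sounds like an exaggeration, here's a back-of-the-envelope sketch using the classic Poisson yield model, Y = e^(-D*A). The defect density and die size below are made-up but plausible numbers, purely for illustration:

```python
# Poisson yield model: the expected fraction of defect-free dies is
# Y = exp(-D0 * A), where D0 is defect density and A is die area.
# Both numbers below are assumptions for the sake of the example.
import math

die_area_cm2 = 1.0     # a ~100 mm^2 die (assumed)
defect_density = 0.5   # defects per cm^2 (hypothetical but realistic order)

yield_fraction = math.exp(-defect_density * die_area_cm2)
print(f"Expected share of defect-free dies: {yield_fraction:.0%}")  # ~61%
```

In this model, a bigger die or a higher defect density means exponentially fewer good chips - which is exactly why pushing density further gets so expensive.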
And quantum computers are still 10-20 years away from mass production.
So what's fueling the current AI boom, if not the math or the hardware? One word. The DATA.
Because for the first time in the history of humanity we have access to enormous amounts of data. Thanks to the Internet, "the cloud" and social networks. Thanks to the millions of pictures we post, the terabytes of content we generate, the 281 billion emails sent every day. Even in a small company like ours, we store tens of millions of support tickets in the help desk app we host for our clients.
We, humanity, finally have the means to analyze all that data. Forget analyzing - for the first time in history we can actually afford to store the massive amounts of data we generate. And it's not even that expensive. Just think about it: my S3 monthly bill for all the file attachments in the above-mentioned SaaS app is somewhere around $20 a month. That's millions of files, attached to the support tickets of all our clients over 10 years - for the price of three pints in a pub.
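For the skeptics, here's the back-of-the-envelope math. It assumes S3's published standard-tier price of roughly $0.023 per GB-month; your region and storage class may vary:

```python
# Rough estimate of how much S3 standard storage ~$20/month buys.
# The per-GB price is an assumption based on S3's standard tier.
price_per_gb_month = 0.023   # USD per GB-month, assumed
monthly_bill = 20.0          # USD

storage_gb = monthly_bill / price_per_gb_month
print(f"~{storage_gb:,.0f} GB (~{storage_gb / 1024:.1f} TB) of attachments")
# -> roughly 870 GB, i.e. most of a terabyte, for beer money
```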
We can finally play with all that data, pairing it with the math we've had all along. And that's what 90% of ML researchers are doing these days: throwing some data at some off-the-shelf algorithm to see what comes out.
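Here's what that workflow looks like in practice - a sketch using scikit-learn's bundled digits dataset. Commodity data plus a commodity algorithm, zero novel math:

```python
# The "throw data at an algorithm and see what comes out" workflow.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Commodity data: 1797 tiny images of handwritten digits
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Commodity algorithm: plain old regression under the hood
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

print(f"Accuracy: {model.score(X_test, y_test):.2%}")  # ~96% with zero tuning
```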
That's why it's all happening now, and not twenty years ago.
So, should we be worried after all? Yes. Well, kinda.
Data is becoming the most valuable asset, because the algorithms are well known and the CPUs can be rented. The one who has more data wins. And the more data you have, the more efficient you are; the more efficient you are, the more revenue you generate; the more revenue you generate, the more access you get to EVEN MORE DATA.
This is a vicious circle. Actually, more of a flywheel. Companies that already have an unfair advantage will keep increasing that unfair advantage. And AI spins this flywheel up enormously.
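To make the flywheel concrete, here's a toy simulation of two imaginary companies. Every growth rate here is invented; the only point is the compounding mechanic, where a bigger data share buys a faster growth rate:

```python
# Toy flywheel model: each company's yearly growth rate scales with its
# share of the total data (more data -> better model -> more users -> more
# data). All numbers are invented purely for illustration.
data_big, data_small = 100.0, 10.0   # starting data, arbitrary units

for year in range(10):
    total = data_big + data_small
    data_big *= 1 + 0.2 * (data_big / total)      # assumed mechanic
    data_small *= 1 + 0.2 * (data_small / total)

print(f"After 10 years: {data_big:.0f} vs {data_small:.0f}")
print(f"Leader's share: {data_big / (data_big + data_small):.0%}")  # ~98%
```

The leader starts 10x bigger and ends up even further ahead, while the small player barely moves. That's the flywheel.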
Thus, it's not the "singularity" we should be afraid of. It's the dark cyberpunk reality where everything is controlled by huge AI-powered corporations and everyone is being watched.