Does big data really lead to dangerous AI?
Many would argue that big data does not have to lead to dangerous AI.
In January 2017, a "secret" gathering on Monterey Bay, California, brought together some of the founding figures of the information age, including Google cofounder Larry Page. Their goal was to discuss the emerging risks of the rapidly growing field of artificial intelligence, or AI, and to prepare to warn the world about them.
But given that most of those in attendance were themselves helping to build AI in Silicon Valley, why would they want to warn people about its dangers?
These Silicon Valley pioneers believe that a future dominated by AI is unavoidable: if they don't develop it, someone else will. So the best they can do is stay involved and try to educate people about the dangers.
But are these threats real? To find out, we must first investigate the history of mathematics.
At a 1930 conference in his hometown of Königsberg, Germany (present-day Kaliningrad, Russia), the mathematician David Hilbert proposed that all science could eventually be reduced to mathematics within a complete system – a system with a single unifying theory that covers everything, everywhere, with no uncertainty.
However, the much younger mathematician Kurt Gödel had demonstrated the day before, at the same conference, that there can be no such absolutely complete logical system. Any consistent logical system powerful enough to express basic arithmetic contains true statements that cannot be proven within it – an outside authority is always required. This is why humans can not only discover but also create systems; in fact, this is how computer programming works: you create a system by defining its rules from the outside.
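For readers who want the formal claim behind this story, here is a standard, slightly simplified rendering of Gödel's first incompleteness theorem; the notation is mine, not the book's.

```latex
% Gödel's first incompleteness theorem (informal rendering).
% F is a formal system; "F ⊢ φ" means the sentence φ is provable in F.
\[
\text{If } F \text{ is consistent, effectively axiomatized, and can express basic arithmetic,}
\]
\[
\text{then there is a sentence } G_F \text{ with } F \nvdash G_F
\quad\text{and}\quad F \nvdash \lnot G_F .
\]
% In other words, F is incomplete: the status of G_F can only be settled
% from outside the system F itself.
```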
For AI to be dangerous in the way Silicon Valley fears, it would have to be a complete system. If it were, then once it had all of the world's data – a primary goal of Google, as we've seen – it could teach itself from that data alone, without any human input, quickly outpacing human intelligence and gaining dominance over us.
But we don't have to be concerned, because, as Gödel demonstrated, no logical system is complete in that sense. This means that if AI is ever truly a threat, it will have to be programmed to be one. It couldn't become one entirely on its own, because it would need an outside authority – specifically, the humans who program it.
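As a loose illustration of that "outside authority" point – my own sketch, not something from the book – here is a toy self-improving learner in Python. Even though it updates itself from data alone, its objective, its update rule, and its stopping condition are all supplied from outside by whoever wrote the program; the names and data are made up for the example.

```python
import random

# A toy "self-teaching" system: it fits a line y = w * x to data it "collects".
# Note what the system does NOT choose for itself: the loss function, the
# learning rate, and the number of steps are all imposed from the outside by
# the human who wrote this program -- an analogy for the "outside authority",
# not a theorem.

def collect_data(n=100):
    """Stand-in for 'all the world's data': noisy samples of y = 3x."""
    return [(x, 3.0 * x + random.gauss(0, 0.1))
            for x in (random.random() for _ in range(n))]

def loss(w, data):
    """Mean squared error -- a human-chosen definition of 'doing well'."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

def self_improve(w, data, lr=0.1):
    """One gradient-descent step -- a human-chosen rule for 'improving'."""
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

data = collect_data()
w = 0.0
for step in range(200):  # human-chosen stopping condition
    w = self_improve(w, data)

print(f"learned w = {w:.3f}, loss = {loss(w, data):.5f}")
```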
In the end, these fears of a tyrannical AI are little more than the paranoid projections of scientists and engineers contemplating the consequences of their own supposedly superior intelligence.
Click the link below to get the book.
https://amzn.to/3BhX0GG