Are we able to build AI that will not ultimately lead to humanity's downfall?
In this post I want to talk about a scenario that is frightening, yet very likely to occur.
AI is a fascinating topic that made its way into movies and public discussion long ago. It is a topic that feels cool to most of us; unlike most other dangers to the human species, death by science fiction is fun to think about. Yet the constant gains being made in AI are likely to destroy humanity as we know it one day.
Developing a proper emotional response to the threats of artificial intelligence seems rather hard, for me at least.
There really is only one way to prevent the emergence of an AI that is more powerful in ways we can hardly conceive, and that would be to stop making any technological progress now. The only events that could halt that progress are a major environmental catastrophe, a global pandemic or a nuclear war. If none of those happens, we are inevitably working towards it.
Given how valuable technology is to our lives, we will continue to evolve it. No matter how big the steps are that we take, ultimately we will get there.
There will come a point where machines are built that are smarter than we are, and at that point they will start improving themselves, a moment in history that mathematician I. J. Good described as the "intelligence explosion":
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an intelligence explosion, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.
The concern is not that machines will turn malevolent at some point; it is that they will become so much more competent than us that the slightest divergence between their goals and ours could bring about our downfall. We do not want to harm ants when we walk on the sidewalk, yet as soon as they interfere with our goal of a clean living room, we exterminate them without a second thought. Whether or not these machines become conscious, they might well treat us with similar disregard.
This might seem far-fetched to many; the inevitability of creating superintelligent AI is questionable to most people at best. Yet there are very solid reasons to believe we are steadily progressing towards it:
- Intelligence is the product of information processing. Machines with narrow intelligence already perform specific tasks at an efficiency that far surpasses human capabilities. Humans' ability to think across the board is the result of processing information at far greater efficiency than animals manage.
- We will continue improving our intelligent machines. Intelligence is undoubtedly our most valuable resource, and we depend on it to solve crucial problems we have not yet managed to solve: curing cancer, understanding our political, societal and economic systems much better. There is simply too much value in constant improvement not to follow through. The rate of progress is not relevant here; eventually we will create machines that can think across many domains.
- Humans are far from the peak of intelligence.
This last point is what makes the situation so precarious and our ability to assess the risks of AI so limited. The spectrum of possible intelligence probably far exceeds our simplistic image of it. If we build machines that are more intelligent than we are, they will explore and exceed that spectrum to heights we cannot imagine.
Suppose we only build an AI that is just as smart as you and me. Electronic circuits function about a million times faster than biochemical ones, so the machine would think a million times faster than the minds that built it. Running for a single week, it would perform roughly 20,000 years of human intellectual work. At that pace, humans would lose the ability to understand, much less constrain, such a mind. Even if we got this intelligence right the first time, the implications would be devastating.
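The 20,000-year figure is simple arithmetic, sketched below; the million-fold speedup is the post's own rough assumption, not a measured constant.

```python
# Back-of-the-envelope: how much human-equivalent thinking does a
# mind running 1,000,000x faster than ours do in one objective week?
speedup = 1_000_000      # electronic vs. biochemical signaling (rough assumption)
weeks_per_year = 52.18   # average number of weeks in a year

# One objective week equals `speedup` subjective weeks of thought.
human_equivalent_years = speedup / weeks_per_year
print(f"{human_equivalent_years:,.0f} years")  # roughly 19,000, i.e. on the order of 20,000
```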
Such a machine could devise ways to make human employment unnecessary across many domains while only the rich profit, creating a level of wealth inequality humanity has never faced before. It could design, produce and deploy machines that render human labor obsolete, powered by sunlight, for little more than the cost of raw materials. But the common man is unlikely to see the fruits of that.
A machine that capable would also be able to wage war at a level of efficiency that could force other nations into a preemptive strike. The first nation to build an AI that intelligent would gain global dominance in every domain; a week's head start would suffice to ensure it. Other nations would have few options but to declare war, since they would have no idea what the AI might do to achieve its predefined goals. It might well be their extinction.
The question arises, under what conditions are we able to progress in AI in a safe manner?
Usually technologies are invented first, safeguarding measures are implemented later.
First the car was invented, speed limits and age restrictions came later.
This way of doing things is obviously not applicable to the development of AI.
All of this makes it seem inevitable that we are on our way to building an omnipotent god whose actions we can neither control nor understand.
It seems we only get one shot to get it right, and how to achieve that is something we need to start discussing. There is real urgency here. We are entering uncharted waters, and so far I have not heard a convincing idea of how to go about it.
Eager to hear your thoughts and opinions! Let's discuss!
We could possibly have a lot of problems "regulating" the robots so that one day we can make them Jesus-like :D I think the moment AI starts to think by itself, it will destroy/abandon us. Only people want to hang out with people. :D
All this apocalyptic thinking of AI destroying humans falls short on one thing. Why would they want to do so? Why would they want to do anything? How on earth can they achieve their own wants/needs?
Conscious or not, if humans interfere with the programming or conscious decision of AI, that's when it will become dangerous. I actually cover this point :)
Good job being on front page of steemit.... one day it will mean a lot more but actually your post has a lot of comments so we will become the new reddit soon
I dunno how, I'm just saying if they start thinking on their own :D
hmm
To think and to have needs are two different things though. I would say that even a chess program thinks (to some extent) but it is perfectly ok and does not cry when we switch it off once the game is done.
HELLO
We have no idea what AIs will think when they become sentient. One possibility I mull over often is that they may think we need to be "corrected" in ways we will not like.
it's funny and horrific
That's a good one.
the robots will not annihilate humanity, they will keep it alive so that it pays the electric service bill.
Haha that's a good one
Until machines are able to mend, assemble and program themselves, maybe we are still safe
they already can
Very-very soon my friend! Its almost about time!
there are machines that repair machines .. so they can do it ..
Also true :)
Stephen Hawking shares a similar opinion. The English physicist is more radical and sees the development of artificial intelligence as a great risk to the human race. The author of "A Brief History of Time" (1988) maintains a critical stance on the creation of robots with abilities that can be classified as intelligent, and asserts that if we reach that point, the end of humanity would be inevitable. Hawking believes that if robots are equipped with algorithms increasingly capable of solving complex problems and, above all, of developing empathy in order to learn from humans, there would be no way to compete against the machines.
I dont think so
I have hopes that humans will become more artistic to stay relevant in the future. Our consciousness is what makes us different from AI. They will have advantages over humans in many areas, but I think human creativity will still be important in the future.
I can't think of a single scenario when AI doesn't deem us as unnecessary.
true
Why is the question solely a dichotomous choice? The volatility of any technology is owed to a number of factors. If the intent of specific designers of related AI technologies is looking to use them for benevolent purposes, then it's entirely possible that most usage will be benevolent. However, a lot of hands touch every technology, and even nuclear technology hasn't destroyed the world, and we're roughly 60 years in from its first introduction. I think it's time to reframe the discussion in terms of the people using the technology, and not the technology itself.
Also, I think it is probable that humans and AI will finally form one species like bio-cyborgs. So the question of confrontation will be dissolved.
Most likely scenario...but, what's to say these hybrids will have "altruistic" human goals?
Well, I meant the confrontation between humans and AI only.
How can someone create something more intelligent than their own intelligence, when they are not intelligent enough themselves?
Do you want to tell me that the AI we are creating will be able to reproduce itself?
Or perhaps they will be smarter than us? So you mean to say the creation can be smarter than the creator?
Or will they be able to annihilate us while we are in pause mode (imposed by them) and can't retaliate?
Will their memory capacity exceed that of the human brain?
Will they be able to interact without the use of electricity?
AI is undoubtedly a huge technology, but I don't see how it could improve itself beyond what it has been programmed to do, becoming more intelligent than its creators to the extreme point of causing chaos for humans, even extinction. So I believe the first ultraintelligent machine cannot be the last invention that man need ever make, because beyond artificial intelligence there is more intelligence, which fortunately is human intelligence.