RE: AI Safety - Researchers at Facebook decide to shut down AI which created its own language
Either consciousness emerges from matter, or matter emerges from consciousness. I believe the latter, and therefore am not worried about AI becoming sentient.
I don't disagree with you, but you are assuming that any problems with AI are a function of that issue. The article states, for example, "There's not yet enough evidence to determine whether they present a threat that could enable machines to overrule their operators." The very fact that this AI already circumvented English, the programmed medium of communication, by developing its own language indicates that the threat is real. And that, in itself, is a real danger, separate from any question of sentience.
Yes, I see your point as well. In that scenario, the machines use a reward-based impetus, a cold logic that doesn't frame the negotiation within a more "humane" context. So the initial AI programming is the problem. The machine is programmed to win in a simple (greedy) way; it lacks abstract reasoning of a higher logical order, like a kind of politeness. I'm sure that is far more complex to program, and it also shows how much more advanced the human brain truly is compared to any of man's artificial intelligence creations. Blinded by Science :)
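To make that concrete, here is a tiny Python sketch (my own toy illustration, not the actual Facebook/FAIR code): the greedy agent's objective scores only the deal outcome, so with zero weight on staying in English it prefers a degenerate message, while adding even a small "readability" term keeps it fluent. The candidate messages, scoring functions, and weights below are all made up for illustration.

```python
# Toy sketch of why a purely greedy, reward-driven negotiator can drift
# away from English: nothing in its objective rewards staying readable.

# Hypothetical candidate messages the agent could send.
candidates = [
    "i can give you one ball if you give me both hats",   # fluent English
    "balls have zero to me to me to me to me to me",      # degenerate shorthand
]

def deal_value(message: str) -> float:
    """Stand-in for the learned estimate of how much reward a message earns.
    Here we simply pretend the shorthand encodes the offer more 'efficiently'."""
    return 1.2 if "to me to me" in message else 1.0

def readability(message: str) -> float:
    """Stand-in for a language-model score of how English-like a message is."""
    return 0.2 if "to me to me" in message else 1.0

def pick_message(weight_on_language: float) -> str:
    # Greedy choice: maximize deal value plus an optional readability bonus.
    return max(candidates,
               key=lambda m: deal_value(m) + weight_on_language * readability(m))

print(pick_message(weight_on_language=0.0))  # reward only -> drifts to shorthand
print(pick_message(weight_on_language=1.0))  # reward + language term -> stays in English
```

The point of the sketch is just the design choice: unless the programmers explicitly weight "speak like a human" into the objective, the simple greedy optimizer has no reason to do so.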
Thank you for your excellent comment, greatdabu. And while more sophisticated and "humane" programming may be an answer, we are still left with the huge issue that such programs are written by human beings who, even if they could avoid mistakes in the programming, would also have to be paragons of virtue in order not to fall prey to the temptations of power, as in every other aspect of human behavior and enterprise. Playing with deadly fire... I have followed you, btw, please follow me (-:
I'm following you, my friend :) Yes, hopefully the human programmers will also be "humane".