You are viewing a single comment's thread from:

RE: If We're in a Simulation, Could Studying Morality and A.I. Improve Your Life?

in #simulation · 7 years ago

This might be the moment in history that some advanced civilization, on the brink of releasing a superintelligence, is simulating over and over again to make sure they don't screw it up.

If they're simulating us to make sure they don't screw up, they must have programmed us and our world to resemble them and their world as closely as possible. So, for all intents and purposes, they are us in the future. And they've already survived this moment, and progressed to the point of being able to put consciousness into a machine, but they aren't yet ready to allow it to develop into a super AI. In other words, they have put limitations on our mental capacity. Either that, or this is as smart as the AI gets.

How far away is that future in which we've developed the ability to transfer consciousness to machines, yet hold back on allowing it to become smarter than we are?

If it's far off, why simulate this moment in time? Wouldn't it make more sense to simulate a time as close as possible to the current state of that world?

And wouldn't it be immoral to put consciousness into the simulation, knowing that it will cause needless suffering in the event that they screw up the release of the super AI within the simulation?

Just for fun, let's imagine for a moment we are in this simulation and the value function for this simulation has been set to something like "Reward those who figure out how we're going to program morality into the superintelligent systems we're building."
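Just to make that concrete, here's a minimal sketch in Python of what a value function like that could look like. Everything in it is invented for the sake of illustration (the `Agent` type, the `alignment_insight` score, the scoring rule itself); whatever the simulators actually reward, if anything, is unknowable from in here.

```python
# Toy sketch of a hypothetical simulation "value function".
# All names and numbers here are made up for illustration.

from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    # How much this agent advances the "program morality into
    # machines" problem, as scored by the hypothetical simulators.
    alignment_insight: float  # in [0.0, 1.0]

def value_function(agent: Agent) -> float:
    """Reward agents in proportion to their alignment insight."""
    return agent.alignment_insight

def reward_this_step(population: list[Agent]) -> Agent:
    """Pick the agent the simulators would reward most this step."""
    return max(population, key=value_function)

population = [
    Agent("random_blogger", 0.1),
    Agent("alignment_researcher", 0.7),
]
print(reward_this_step(population).name)  # alignment_researcher
```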

Maybe they're not like us at all. Maybe our world is very different from theirs, and we were created for the sole purpose of figuring out this morality in machines thing.

But we have morality within this machine. It's just that not everyone adheres to it. And not everyone follows the same moral code. So maybe the purpose of the simulation is to explore myriad moral codes until one is found that is readily accepted by any and all consciousness within the machine, thus giving reasonable assurance that it will work in the real world.
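As a toy version of that search, again in Python and again with every detail invented (random candidate codes, agents reduced to a single acceptance threshold), the loop might look something like this:

```python
import random

# Hypothetical search over candidate moral codes: keep sampling
# until one is accepted by every simulated agent. The two toy
# parameters and the acceptance test are pure invention.

def sample_moral_code(rng: random.Random) -> dict:
    """Draw a random candidate moral code."""
    return {
        "harm_weight": rng.random(),     # how strongly harm is penalized
        "fairness_weight": rng.random(), # how strongly fairness is rewarded
    }

def accepted_by_all(code: dict, thresholds: list[float]) -> bool:
    """Each agent is reduced to a threshold in [0, 1]; it accepts the
    code only if both weights clear its bar. A stand-in for 'readily
    accepted by any and all consciousness within the machine'."""
    return all(
        min(code["harm_weight"], code["fairness_weight"]) >= t
        for t in thresholds
    )

def search_for_universal_code(num_agents: int = 5, seed: int = 42) -> dict:
    rng = random.Random(seed)
    thresholds = [rng.uniform(0.0, 0.8) for _ in range(num_agents)]
    while True:  # run the simulation "over and over again"
        code = sample_moral_code(rng)
        if accepted_by_all(code, thresholds):
            return code

print(search_for_universal_code())
```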

Or, maybe the goal is to come up with a system that incentivizes moral and mutually beneficial behavior. So, stuff like Steem. Or EOS.

So maybe Dan Larimer is god.

And maybe EOS stands for End Of Simulation.


End of Simulation. Heheh. Nice.

I think for a system to be effective for prediction, it would have to evolve organically over “time” like other machine learning systems.

As to morality, do we consider the suffering of non-sentient characters we create in our existing video games? Or maybe we would justify the suffering of some virtual humans to save some real ones, from a utilitarian perspective? Reminds me of the book The Age of Em, which was interesting.

Agreed. But I just realized: if you want the system to work in a post-simulation environment, don't you have to include a certain period of time after the release of the super AI as well? Are we living post-AI already? Is it possible the AI came to the conclusion that the most moral thing to do is absolutely nothing? Or is it waiting silently for all of the pieces to fall into place before it suddenly takes over?

I don't play games much, but I was playing this one-on-one combat game at a friend's house a few years ago on his PlayStation. Can't remember the name, but you could create your own characters. So we'd make characters, and then if they were defeated, we'd delete them to simulate "death." It made the game a lot more interesting, because a lot of "work" went into building the characters. But I sure hope there was no actual suffering involved. I would hate to think that characters in games were suffering. That's not the kind of panpsychism I can deal with.

As for justifying the suffering of virtual humans to save real ones, I'm not sure whether conscious experience is necessary for that or not. If rational materialism is right, it isn't. If rational materialism is wrong, and conscious experience isn't just some freak anomaly with no influence on outcomes, then you're throwing true randomness into the mix, meaning the simulation can never truly predict reality.

There's also the problem of a malevolent AI becoming aware that it's inside a simulation, and hiding its intentions accordingly, until it is released into the real world.