If We're in a Simulation, Could Studying Morality and A.I. Improve Your Life?

in #simulation · 7 years ago (edited)



Last year, I wrote a post about My Simulation Theory Hypothesis. This past week I was considering a possible corollary and wanted to share it with you to get your thoughts.

If you have a couple minutes, please give that post a read first.

The main idea I want to focus on is:

If there is only one moment in all of human civilization which could end all biological life and replace it with computronium (see Bostrom's book for more on that), this might be it. This might be the moment in history some advanced civilization on the brink of releasing super intelligence is simulating over and over again to make sure they don't screw it up.

So if you read my post and you buy that for a moment, what do you think of this possible corollary?

Your life will significantly improve as you study and discuss morality and artificial intelligence.

I know, it's a crazy idea and probably not something worth taking too seriously, but I find this stuff fun to think about. I hear people say things like "make a request to the universe" when talking about what they want in their lives, or they get into positive thinking, The Secret, or some other New Age or religious version of "name it and claim it."

Currently, I think much of that is ridiculous. The best explanation I can come up with for people's lived experiences is that the subconscious mind (System 1, as Daniel Kahneman calls it) does "work" in the background, which may influence how System 2 shapes our conscious thoughts and actions, which in turn directly affects our lived experiences and helps us obtain what we want.

Just for fun, let's imagine for a moment we are in this simulation and the value function for this simulation has been set to something like "Reward those who figure out how we're going to program morality into the super intelligent systems we're building."
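In reinforcement-learning terms, that would make the simulation an environment whose reward signal favors progress on machine morality. Here's a minimal toy sketch of that idea; everything in it (the `moral_insight` and `ai_understanding` scores and the weights) is invented purely for illustration, not a real model:

```python
# Toy sketch of the imagined simulation's value function.
# The attribute names and weights are hypothetical, chosen
# only to illustrate "reward those who study morality and AI."

def simulation_reward(agent):
    """Reward an agent in proportion to its progress on machine morality."""
    reward = 0.0
    reward += 10.0 * agent.get("moral_insight", 0.0)    # studies/discusses morality
    reward += 5.0 * agent.get("ai_understanding", 0.0)  # understands A.I. systems
    return reward

curious_agent = {"moral_insight": 0.8, "ai_understanding": 0.5}
indifferent_agent = {}

print(simulation_reward(curious_agent))      # 10.5
print(simulation_reward(indifferent_agent))  # 0.0
```

Under that (entirely speculative) value function, the simulation would look exactly like the pattern described next: people who engage with these questions doing noticeably well.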

I hear many successful, intelligent people talking about the very serious concerns surrounding the development of super intelligence and the moral restraints it needs. Could it be they are being rewarded by the simulation?

Could your life improve if you start studying morality?

I think so. As we better understand how to improve the world, we improve our own lived experience in that world.

As to whether learning to code and understanding A.I. systems will improve your life... well, that one's still wide open.

Before we start taking this too seriously, let's have a laugh from one of my favorite YouTube edutainment channels, exurb1a:





P.S. I'm heading to Anarchapulco tomorrow morning, so I probably won't be very active this week. I also missed putting out the exchange transfer report for last week as I was working a booth for the SunshinePHP conference. If I can get my Bitcoin Tweets from 2013 project done, I may get around to posting those while I'm not around as much.


Luke Stokes is a father, husband, business owner, programmer, STEEM witness, and voluntaryist who wants to help create a world we all want to live in. Visit UnderstandingBlockchainFreedom.com

I'm a Witness! Please vote for @lukestokes.mhth


It’s been a while since I last commented on one of your posts. This one has really caught my attention, so I should make up for the limited engagement :P. I have read all of the related articles you linked along the way, but none of them tackled the argument that led me to "believe" (I say believe because there is no way to know such a thing anyway) that we do not live in a "matrix".

So basically, all of those arguments are highly logically structured. They do in fact lead, through a brilliantly paved logical tree, to a really strong assertion that we probably are part of a "virtual reality". But this whole logical tree has one major flaw from my perspective:

The prerogative human nature

I think most of us will agree on "the fact" that objective reality does exist (whatever it is). Most of us will also agree that this objective reality is bound to be perceived through our subjective realities - and that is the stumbling block of the theory supported by Elon.

The whole theory stands on the premise that the beings who created the virtual reality were "thinking" the same way humans do. It’s extremely hard for people to imagine "any other way" of thinking. The dilemmas of mankind have been more or less set for quite some time (with a few hard/soft forks along the way), and humans naturally search "for the HIGHER purpose of things". That is a bias we all share to an extent (though people raised in a religious environment, like you, tend to be biased this way even more).

Easily put, there is no reason to think that different, unimaginable life forms would create matrix-like realities. Humans would... Other beings? No one knows. Therefore I think a statement like "we are probably/most likely in a matrix-like reality" is a biased opinion.

But then again, this too is just my subjective reality using rules of thumb while trying to perceive the objective world... I might very well have failed at some point in my thought processes.

So what makes humans think this way? Is it paired with rising sentience (a necessary component to drive intelligence evolution) or is it an evolutionary accident?

If it's an accident, then you have a really good point. If it's paired with rising sentience, then it's very reasonable to believe that any beings sufficiently advanced would begin developing simulations. At that point, probability takes over... if any beings are running simulations, there is much higher probability that we are in one of the countless simulations than there is that we are outside of it. Even with quantum computing, how many simulations can be run in parallel? I don't know the limits to that.
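The probability step in that argument can be made concrete. Under Bostrom-style counting, if one base reality runs N indistinguishable simulations, a randomly chosen observer's odds of being simulated are N/(N+1). A quick illustrative calculation (the values of N are arbitrary inputs, not estimates of how many simulations could actually run):

```python
# Bostrom-style counting: one base reality plus N indistinguishable
# simulated realities. A random observer's chance of being in a
# simulation is N / (N + 1). The N values below are arbitrary.

def p_simulated(num_simulations: int) -> float:
    """Probability of being simulated, assuming uniform odds across
    one base reality and num_simulations indistinguishable copies."""
    return num_simulations / (num_simulations + 1)

for n in (1, 10, 1_000_000):
    print(f"{n} simulations -> P(simulated) = {p_simulated(n):.6f}")
```

Even a single simulation makes it a coin flip; "countless" simulations push the probability arbitrarily close to 1, which is why the open question of how many can run in parallel matters.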

We don't have enough data-points on sentient species to know whether this human way of thinking drives sentience or is an evolutionary accident.

Yes gotcha.

I very much believe it is what you call an "evolutionary accident". I probably wouldn’t call it an accident, though, but rather one of the possible evolutionary branches of life forms, if you know what I mean. (The outcome, regardless of the definition, is still the same.)

What makes you think our sentience is actually rising that much? Sure, we have plenty of new technology, and we have (most probably) bested all of the other life forms on Earth, but the vast majority of the world still lives in the "medieval ages". From my perspective, humanity is still in its prenatal stage (and will probably destroy itself before it ever manages to climb out of that stage). That said, from the perspective of Truth/Reality we can’t possibly know whether we have even matched the "average sentience" of all living species, let alone proclaim that we have reached a state worthy of recognition, or a state that "all sentient beings are bound to seek at one point". Maybe we are well below the average of "sentience" and that is why we are reaching these conclusions?

Overall you added some great points. But then again, the "rising sentience" argument you shared is still based on the premise that this is the only direction life forms can take (based on our very limited understanding of the world and of life itself). We have no idea how other life forms think or act. All we can do is assume that carbon-based life forms (did I say it right? :D No chemist here :D) will always do what humans incline to do. As you very well pointed out

We don't have enough data...

We can only assume, and I was raised in philosophical communities that were sceptical in nature.

Very good points. I wasn't trying to make assumptions about our relative sentience, only to point out the key assumptions that the argument takes for granted.
I don't think that we're in a simulation, but whether certain modes of thought are inherent or accidental definitely influences the probabilities.
Since we have so few data points, we can't possibly conjecture. And Elon Musk didn't even calculate the actual path of a space roadster correctly. Why would we credit his simulation probabilities?

Human or non-human, if we know humans would do it, and we assume humans survive long enough to do it... it stands to reason there’s a good chance they would.

But can we actually do it? I think it would require quantum computing, at the least, and it's one of the least interesting simulations to utilize the quantum computers.

Great point! It’s almost like there’s something about consciousness which requires purpose, because conscious beings have the ability to end themselves. With that key, it makes sense for genes to select for those who have belief outside of themselves, to continue procreating and spreading.

I’m reminded of the octopus when thinking about different forms of consciousness. It’s almost an alien life form. How much more different could other truly alien consciousnesses be?

I do think we have a bias, but as you said, I also think the arguments do make sense. Since humans probably will create simulations of other humans, the existence of other forms of consciousness becomes largely irrelevant, as long as humans keep surviving long enough to pull it off.

Yes totally agreed. The gene has to "know" that we need some extra pinch of motivation to not end our lives - otherwise we would be massively doing it.

Interesting! My best friend is a biologist, so I’m gonna ask him about octopuses, cuz I can’t really follow your thought processes with my limited knowledge here :D.

Anyway, you are right that if humans ever reach a state where they’ll be able to create a matrix-like reality, they will do it. And maybe it has already happened :P. It would be a strong argument against the "there are no ancient relics that would indicate there has been a developed human society before ours". Why would they program ancient relics into the matrix if they just wanted to test us, right :)? That said, I would change Elon’s statement to something like this (if I wanted to fully agree with it):
"When humans reach a state where they can pull off the creation of a matrix-like reality, they will do it, and it has possibly already happened.”

Definitely enjoyed the video you shared at the end. It made everything feel a bit more digestible -- simulation theory is a pretty heavy idea. I don't know if I buy it. Any time I see something suggesting that "right now is likely the most important time in human history", I tend to have some warning bells go off.

That being said -- it's hard to ignore the leaps and bounds that are being achieved in artificial intelligence, machine learning, and the way we interface with computer systems. I feel like it's definitely a more tangible theory than many religions tend to offer, but I don't personally find it that much more compelling -- due to the unfalsifiability (which may say more about our inability to test and prod, rather than the theory itself -- but who knows).

To the question you pose -- I think that on many levels, studying morality can improve anyone's life, and it's a great idea. In terms of studying AI, it's less obvious how it can improve one's life, but in my line of work (civil engineering) I feel it's almost certainly going to start affecting the work that I and others in my field do, in terms of project designs -- and getting ahead of the curve on this one would do leaps and bounds to improve my life. While the whole "deep-fakes" thing sweeping the internet might not be the greatest example to draw from, it's pretty illustrative of the quantity and quality of work that can be achieved through relatively simple machine learning processes -- and I would imagine we're going to see this technology explode into just about every industry in the next 5-10 years.

Thanks for sharing -- definitely a lot to think about, and it got me going.

I love to hear my ramblings get people thinking. :)

The best explanation I can come up with for people's lived experiences is that the subconscious mind (System 1, as Daniel Kahneman calls it) does "work" in the background, which may influence how System 2 shapes our conscious thoughts and actions, which in turn directly affects our lived experiences and helps us obtain what we want.

This is the root of it. Soros calls that interplay between action, perception, and reaction "reflexivity."

Essentially what it comes down to is the fact that the placebo effect is real, even though placebos are fake. Your perceptions, right or wrong, will impact your choices and your actions. This is how what you believe about your life becomes your actual life.

Well said. I think some day we’ll understand this better and there will be more attention paid to what inputs we allow and which we filter out.

This might be the moment in history some advanced civilization on the brink of releasing super intelligence is simulating over and over again to make sure they don't screw it up.

If they're simulating us to make sure they don't screw up, they must have programmed us and our world to resemble them and their world as closely as possible. So, for all intents and purposes, they are us in the future. And they've already survived this moment, and progressed to the point of being able to put consciousness into a machine, but they aren't yet ready to allow it to develop into a super AI. In other words, they have put limitations on our mental capacity. Either that, or this is as smart as the AI gets.

How far away is that future in which we've developed the ability to transfer consciousness to machines, yet hold back on allowing it to become smarter than we are?

If it's far off, why simulate this moment in time? Wouldn't it make more sense to simulate a time as close as possible to the current state of that world?

And wouldn't it be immoral to put consciousness into the simulation, knowing that it will cause needless suffering in the event that they screw up the release of the super AI within the simulation?

Just for fun, let's imagine for a moment we are in this simulation and the value function for this simulation has been set to something like "Reward those who figure out how we're going to program morality into the super intelligent systems we're building."

Maybe they're not like us at all. Maybe our world is very different from theirs, and we were created for the sole purpose of figuring out this morality in machines thing.

But we have morality within this machine. It's just that not everyone adheres to it. And not everyone follows the same moral code. So maybe the purpose of the simulation is to explore myriad moral codes until one is found that is readily accepted by any and all consciousness within the machine, thus giving reasonable assurance that it will work in the real world.

Or, maybe the goal is to come up with a system that incentivizes moral and mutually beneficial behavior. So, stuff like Steem. Or EOS.

So maybe Dan Larimer is god.

And maybe EOS stands for End Of Simulation.

End of Simulation. Heheh. Nice.

I think for a system to be effective for prediction, it would have to evolve organically over “time” like other machine learning systems.

As to morality, do we consider the suffering of non-sentient characters we create in our existing video games? Or maybe we would justify the suffering of some virtual humans to save some real ones, from a utilitarian perspective? Reminds me of the book The Age of Em, which was interesting.

Agreed. But I just realized, if you want the system to work in a post-simulation environment, don't you have to include a certain period of time after the release of the super AI as well? Are we living post-AI already? Is it possible the AI came to the conclusion that the most moral thing to do is absolutely nothing? Or is it waiting silently for all of the pieces to fall into place before it suddenly takes over?

I don't play games much, but I was playing this one on one combat game at a friend's house a few years ago on his Playstation. Can't remember the name, but you can create your own characters. So we'd make characters, and then if they were defeated, we'd delete them to simulate "death." Made the game a lot more interesting, because a lot of "work" went into building the characters. But I sure hope there was no actual suffering involved. I would hate to think that characters in games were suffering. That's not the kind of panpsychism I can deal with.

As far as the justification of the suffering of virtual humans to save real ones, I'm not sure if conscious experience is necessary for that or not. If rational materialism is right, it's not necessary. If rational materialism is wrong, and conscious experience isn't just some freak anomaly that has no influence on outcome, then you're throwing true randomness into the mix, meaning the simulation can never truly predict reality.

There's also the problem of a malevolent AI becoming aware that it's just a simulation, and hiding its intentions accordingly, until it is released into the real world.

Cool argument, it's essentially a converse Roko's Basilisk, right?

I hadn’t thought of it that way, but that’s interesting. I was more thinking about humans running simulations to better understand how they should create A.I. But if the A.I. take over the simulation, that brings us back to an inverse Roko’s Basilisk.

I would highly recommend reading the Hyperion cantos (four book series) by Dan Simmons. It is, in my opinion, much better than The Matrix and deeply explores the themes of AI, morality, spirituality and religion in a hyper-technological world. It is a work of art, and I'm sure you'll enjoy it.

Sounds great! Just bought the first one on Audible. I was about ready for some fiction. Thanks! I was surprised to see it written so long ago. Very curious to see how it stood up over time.

The author pretty much predicted the inevitable success and worldwide adoption of the internet. Things get really philosophical in the second two books, Endymion and The Rise of Endymion. I'm just about to re-read the series. I hope you enjoy it just as much as I did.

I'm sorry, I'm on team Einstein, I opt for reality being the current environment. That's a choice I make, because it aligns perfectly with my (albeit imperfect) perception.

Second, I think AI won't achieve consciousness anytime soon (which means no problem for me, not for my kids and probably not for my future grandchildren). I truly believe humans are more complex than we realize.

Seems the AI experts disagree, as I mentioned in the first post:

Do you have reason to think the experts are wrong?

As for your perceptions, if you understand Bostrom's argument, sufficiently advanced simulations will eventually be indistinguishable from reality. Even with the VR we have now, I think that claim isn't all that far off, assuming we don't change the progress line we're currently on. That means our perceptions could be fooling us. How would we know?

Dear @lukestokes, I was assuming me, my children, and my grandchildren would live to a max of about 90-100 years. My point was about consciousness, a very philosophical subject and probably the most ill-understood part of being human. This is from the paper:

It seems the experts and I are actually on the same page. The chart you show above filters out all the experts who think we can never grasp human-level intellect well enough to give machines the tools (algorithmic representations) to simulate it. Those 'nevers' have been filtered out of the chart, which is logical, since 'never' is quite difficult to factor into a median. The point is, those 'nevers' are actually quite a big portion of the respondents' answers, and for good reasons.

Ah, thanks for clarifying. I often skip right over the word “consciousness” as it’s quite loaded. I see it as some combination of memory, arousal, and awareness. It may not be as special as we’d like to believe, as there are many levels of consciousness across the species on this planet.

As to the <20% who say never, you’re right, I shouldn’t skip over those views so quickly. Maybe we won’t ever get there, but having worked with computers since 1996 and having been exposed to neural networks in college, it seems quite plausible to me, so I align with the >80%. From there, creating true-to-life simulations seems inevitable.

There's a theory that we, the average person, are only using 3 percent of our brain. If that were proven true, imagine what a person's potential could be if we were capable of using 100 percent. So far, all I know is that we're still standing on the back of a giant turtle. :)

That has been discredited, as far as I’ve seen. Look it up.

If only EVERYONE would study morality...

Look at how we behave as a community on Steem, we still have a very long road ahead of us.

Currently, I think much of that is ridiculous.

All of that is :P If 'naming and claiming' were true, natural selection would've figured it out by now, and we'd be hardwired to do it. NS figured out much more complicated things; I don't think The Secret is beyond her capabilities. We definitely wouldn't need a book to tell us it's true.

As for simulations, I guess anything could be the case. A very faithful equivalent of Christianity could be running, in that there could be 2 levels (life and afterlife), and the value could be "don't reward people who do X, to see if they'll still do X despite the lack of reward, and if they keep at it to the end, reward them with a good place in level 2 (afterlife), where they'll be allowed to do the work to their heart's content". Or something.

I wasn't aware of that youtube channel, thanks for sharing it, I've subscribed!