RE: Justin Sun on the future of AI: Genes, memes, and singularities
Why is that sort of subjective view of reality more useful than a simpler mechanistic way of controlling our perceptions and behaviors?
I haven't dug into this topic in much depth myself, but it seems to me that this view makes a big assumption: that you can create a simpler, non-conscious thing that does what conscious things do. We have no way of knowing for sure whether other things have subjective experiences, or what those experiences are like. Maybe anything with an internal model complex enough to interact with us successfully will have the equivalent of an inner narrative. Maybe the way we humans do it is actually the simplest way.
If Sun is right that family, as a concept, will stop existing, my understanding of Dawkins would suggest that this would usher in an end to altruism, and a corresponding deactivation of evolution's ability to adapt and improve.
I don't think that's right. The reason something arose originally doesn't have to be the reason it continues to exist. Evolution frequently grabs things that are "lying around" and repurposes them in other systems (e.g. whale flippers built from terrestrial legs). Some of our "nice" behaviors may have arisen for kinship reasons, but if they're good and self-sustaining on their own, then taking away the original impetus won't make them stop.
What are your thoughts about Sun's vision for the future?
I think it raises some interesting questions. I'm not sure I buy bio-technical synthesis as being that important. The question of whether you can ship-of-Theseus swap out a consciousness's substrate is interesting philosophically, but I'm not sure it will have much practical relevance. The more interesting question to me is whether AIs (of whatever form) can do the kinds of things we do that make us believe other people are as conscious as we are. I don't see any reason why that should be impossible, but it's an open question whether the current LLM paradigm is on its way there or whether other elements will be necessary.

And if AIs turn out to be different kinds of things, it's not obvious that "number of agents" will be an especially important number in the future. It has historically been a good heuristic with humans because we have some intuitive sense of how multiple humans "add up", but entities that don't operate the same way raise interesting questions. Would it actually be valuable to have a country with 100 billion identical entities, or is something like diversity of ways of thinking (which we tend to get "for free" because each human has unique genes and a unique experience of developing in the world) the important part?
I think there's certainly a possibility of catastrophe at the hands of AI expanding out of control. But there are also potential upsides. Maybe entities that are more native to a world of information won't have the same scarcity concerns that we do. Maybe they'll understand things differently than we do, in a way that ends up benefiting everyone. Maybe blockchain-native entities would be naturally inclined toward distributed, mutually beneficial arrangements with other entities rather than basing things on physical-world power and status. (I'm skeptical on that last one: I think the distributed nature of blockchains is more aspirational than real under the current models, and entities that optimize themselves to thrive in the current crypto ecosystem may have some rather unpleasant traits.)
Maybe. Eric S. Raymond argued a couple of months ago that there's no hard problem at all, which I take to be a similar point, though as far as I can find he didn't elaborate. To me, it doesn't seem like a subjective inner narrative should be strictly necessary, but that's just my own intuition.
You may be right. Still, some new mechanism for altruistic behavior that benefits the meme at the expense of the host would probably be needed, along with a deciding function for when to use it.
Thanks for your feedback. You raised several interesting points and questions that I hadn't considered.