On ethics in decentralized systems

By Jake Brukhman (@jbrukh). I am cross-posting this article from CoinFund's Blockchain Investments Blog.

“But great power carries with it great responsibility, and great responsibility entails a large amount of anxiety.”
— Sir Hercules G. R. Robinson

Last summer Andreessen Horowitz and Union Square Ventures announced a $1M seed round in OpenBazaar. Implicitly, everyone understood that these highly respected traditional VC firms were, in effect, funding the same kind of proposition that five months earlier had landed the alleged proprietor of Silk Road, Ross Ulbricht, in jail for the rest of his life. Consequently, Brad Burnham of Union Square couldn’t responsibly end his blog post on the announcement without “addressing the potential dark side of decentralized markets.” It was a nod and not an admission, but the unspoken analogy hanging in the air was obvious.

Then, less than a year later, this happened: within hours of launch, illegal contraband appeared on the OpenBazaar decentralized marketplace, because the platform’s free market and anonymity allowed it. Concurrently, the advent of DAO technology inspired Daemon, a straight-up darknet market, to announce a decentralized crowdfunding campaign. It is allegedly built on the DAO work of the Slock.it team, powered by Ethereum, and hosted on Tor.

Up until now, few people have brought up the fact that the blockchain community and generally anyone building decentralized applications right now are taking on a “great responsibility” for unleashing vast and unpredictable systems into the world. In my opinion, we should be much more anxious than we seem to be.

Alex Bulkin rightfully calls this the “elephant in the room”. He posits that letting loose uncensored, anonymous, free market activity on a large scale creates a massive (and unaddressed) potential for destructive unethical behavior, and I’d like to add to that analysis that it may not be entirely intentional. The idea of ethical abuse of technological systems certainly is not new. We’ve lived for many years with its many diverse forms — shady IRC chatrooms, highly questionable subreddits, BitTorrent trackers in grey areas of legality, the Deep Web, and of course Silk Road and other darknet markets. But given the rapid trajectory of growth in the blockchain space and its networks, the difference is scale.

Our fund, CoinFund, is very much philosophically committed to the principle of a free, open, and uncensored market. But should we invest in the DAO crowdfunding of a platform such as Daemon, which seems to enable, and even implicitly support, highly questionable activities? Intuitively, off the top, the answer seems to be no. Yet, without a framework or set of principles for what kinds of open investments are acceptable, we are at a loss to explain our exact reasoning.

The issue is not so much, “Where are we going?” — humans and economics are great at adapting to new technological circumstances and making them work — but rather, “How will we get there?” I would like to attempt to formulate a framework for thinking about unethical emergent behaviors that arise in decentralized systems, recognize that they are likely short-term but impactful reverberations, and to echo Alex in a call on the community to debate this issue.

The DDoS marketplace

One day, someone creates a decentralized application on a smart contract platform such as Ethereum, and this application implements a DDoS marketplace. What is a DDoS marketplace? It is a marketplace for crowdfunding DDoS attacks aimed at particular websites. The user enters a domain name, and the application estimates the number of requests per second required to make the site unreachable or unusable for a period of time. The application also sets a fixed reward that the market is willing to transfer to any network node that provably performs an HTTP request to the site. The entire Internet then proceeds to globally crowdfund the market. When the minimum raise is achieved, the application puts the value proposition to the network: if you can prove that you made a request to this domain, I will pay you — go!
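
To make the mechanics concrete, here is a deliberately simplified sketch of the crowdfunded-bounty logic described above, written as plain Python rather than an actual smart contract. All names here (`BountyMarket`, `pledge`, `claim`) are hypothetical illustrations, not a real platform’s API:

```python
# Hypothetical sketch of the crowdfunded-bounty mechanics described above.
# All names are illustrative; this is not a real smart contract.

class BountyMarket:
    """Pools pledges toward a target; once the minimum raise is met,
    any node that submits a stipulated proof is paid a fixed reward
    until the pool is exhausted."""

    def __init__(self, target: str, reward_per_proof: float, minimum_raise: float):
        self.target = target
        self.reward = reward_per_proof
        self.minimum_raise = minimum_raise
        self.pool = 0.0
        self.payouts = 0

    def pledge(self, amount: float) -> None:
        # Anyone, anywhere, can contribute capital toward the goal.
        self.pool += amount

    @property
    def open(self) -> bool:
        # The value proposition goes live only after the crowdfund succeeds.
        return self.pool >= self.minimum_raise

    def claim(self, proof_is_valid: bool) -> float:
        # A node presents proof that it made a request to the target and is
        # paid, with no regard to why the requests are valuable to anyone.
        if self.open and proof_is_valid and self.pool >= self.reward:
            self.pool -= self.reward
            self.payouts += 1
            return self.reward
        return 0.0
```

The point of the sketch is the shape of the incentive: once the minimum raise is met, the contract pays any node that presents a valid proof, with no notion anywhere in the logic of why the requests matter.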

The economics are clear. There is no one stopping anyone from creating such a marketplace, nor such a market, nor is such an application very susceptible to regulation or law enforcement. The risk that this application will come to be enabled and implemented is non-trivial, and every day it inches closer and closer to fruition. If you don’t want to wait until then, you can buy small, cheap DDoS attacks from private botnets today.

The decentralized version of this system is full of emergent behaviors. Few individuals would pay a lot of money to single-handedly DDoS a site. But a decentralized DDoS marketplace harnesses the democratic power that aggregation of global capital provides. And at scale, it has the potential to turn the political sentiments of an entire group into a very physical event with very real consequences for individual freedoms. Mind-bogglingly, it’s not even required that someone makes a conscious decision: I am going to implement a DDoS market. A DDoS market can simply emerge from some general configuration of free market incentives. For example, it is trivial to accidentally build a DDoS market on a prediction platform like Augur. One just needs to forget to think and to enter the question, “Will Disney’s website be down today?” as the future event. What else are we forgetting to think about?

Finally, individual nodes on the network may not even be aware that they are participating in an activity that, in aggregate, is unethical. If some well-paid bounty for an HTTP request comes down the pipe, who is to say exactly why it is valuable? Chances are, the node won’t think much of it: making HTTP requests is neither illegal nor novel. And such is the trouble of emergent behaviors. Just like the event’s funding, the responsibility for the event is shared among a large group of which no member is directly responsible.

From the standpoint of the victim, a swarm of HTTP requests comes their way and disturbs their enterprise. There is downtime and interruption of service. Sure, there are ways to protect sites from DDoS attacks, but they cost money, and not every attack vector is preventable. At the end of the day, a market can censor a victim.

At scale, the inefficiencies in these systems collapse to zero; what emerges can only be described as a kind of group telekinesis. DDoS markets are created in milliseconds in response to sentiment. The markets are funded in milliseconds because hierarchies of automated investors monitor for opportunities. DDoS attacks therefore begin in milliseconds as well, with armies of nodes happily performing automated attacks and collecting payments. The emergent behavior of such a system is that the political will of groups to censor targets is carried out instantaneously. This kind of automation could make the case of Justine Sacco, whose life was supposedly ruined by the popular will of Twitter in the course of an airline flight, look like a tricycle next to an F-16 fighter jet.

What does the proliferation of DDoS marketplaces mean at scale? Does it mean that celebrity popularity contests will be incontrovertibly decided by the new and sudden power of groups to shut down their websites, to censor their data, to invade their privacy, and — ultimately — to put them out of business? (Will you be able to resist raising the proverbial digital pitchfork at that spoiled brat, Justin Bieber?) Will the uptime of the media outlets of presidential candidates become proportional to their polls and debate results? Will “state-run media” be eliminated by reverse censorship perpetuated by the populace against their oppressors?

How do the user, the platform, the application, the victim, and the system as a whole provide ethical guarantees in such a Wild West of incentives?

On ethics in decentralized networks

Google can only afford half a trillion dollars’ worth of computer nodes, but a global network owned by 7 billion individuals and an equal number of mobile devices can grow massively, massively larger. That is the point: scale.

When cryptocurrency is introduced into a global, decentralized computer network it swiftly unlocks an emergent behavior. That is, given the ability to transfer value programmatically, nodes may compel other nodes to do their bidding through various incentivization processes. It is important to note that “nodes” here refers to completely self-operating, globally distributed, possibly anonymous, and untrusted nodes.

Whether the nodes are operated by humans or by software is immaterial — they exhibit, on average, the same behavioral tendencies:

  • Nodes are greedy. If value can be acquired by nodes, it will be acquired by nodes: they will generally fulfill economic demand when it is viable to do so.
  • Nodes are amoral. The behavior of nodes is based on economic considerations in networks and not ethical or jurisdictional requirements.

Probably the most advanced real-world example today of the unintended consequences I am describing is the behavior of brain wallets, a Bitcoin address generation scheme that was shown to be insecure. The idea of brain wallets is that, because Bitcoin addresses and private keys are hard to memorize, one could instead generate the key from a mentally stored password, achieving both convenience and security. A short time later, this happened:

According to researchers, many wallets were drained within minutes, while most were emptied within 24 hours. […] Experts identified 884 brain wallets storing 1,806 BTC (worth approximately $100,000), and determined that only 21 of them, representing 2 percent of the total, were not drained by cybercriminals.

Of course, the “cybercriminals” in question were actually pieces of automated software running on anonymous nodes — in fact, one doesn’t even need to be connected to the Internet to brute force a result. Putting your money into an insecure wallet today is like throwing it away, and that’s obvious, but consider that this system arose simply out of carelessly generated economic incentives and in total contradiction to the intentions of the wallet software.
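
The weakness is easy to demonstrate. The sketch below is a simplification that derives a key as a bare SHA-256 of the passphrase (real brain wallets fed this hash into Bitcoin’s elliptic-curve key generation), but it shows why the scheme invites offline dictionary attacks: the derivation is deterministic, so an attacker who hashes a wordlist recovers exactly the keys that victims chose:

```python
import hashlib

def brainwallet_key(passphrase: str) -> str:
    # Classic brain-wallet derivation: the key material is just SHA-256
    # of a memorized passphrase. Deterministic, so anyone who guesses
    # the passphrase recovers the identical key.
    return hashlib.sha256(passphrase.encode()).hexdigest()

def dictionary_attack(candidates, funded_keys):
    # The attacker needs no network access to search: hash a wordlist
    # and compare against keys known (from the public ledger) to hold
    # funds, then sweep any matches.
    hits = {}
    for phrase in candidates:
        key = brainwallet_key(phrase)
        if key in funded_keys:
            hits[phrase] = key
    return hits
```

Because funded addresses are visible on the public ledger, candidate keys can be precomputed entirely offline, which is why the drained wallets described above required no “hacking” of anything at all.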

By example we have demonstrated the basic underlying amoral mechanism of cryptocurrency-enabled decentralized networks, which is worthy of a name: let’s call it the amoral bounty model. In this model, (i) a viable economic pressure exists or is created (through a smart contract, DAO, bounty, vulnerability, or centralized service), which (ii) incentivizes a large decentralized network of nodes, as above, to (iii) fulfill it based solely on economic considerations.
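
A minimal sketch of the node-side decision rule, with hypothetical names, makes the amorality explicit: the bounty’s purpose never enters the calculation, only its economics do:

```python
# Illustrative node-side decision rule for the "amoral bounty model":
# a node fulfills whatever bounty is profitable, blind to its purpose.

from dataclasses import dataclass

@dataclass
class Bounty:
    description: str     # never consulted by the decision logic below
    payout: float
    cost_to_fulfill: float

def node_accepts(bounty: Bounty, margin: float = 0.0) -> bool:
    # The description -- cancer cure or censorship campaign -- plays no
    # role; the node participates whenever expected profit clears its
    # required margin.
    return bounty.payout - bounty.cost_to_fulfill > margin
```

Two bounties with identical economics are indistinguishable to such an agent, whatever their descriptions say.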

You can set a bounty on the cure for cancer or the merciless censorship of a political website: it doesn’t matter to anonymous, amoral, economically-driven agents. The potential for abuse is obvious. To throw out just three of probably thousands of examples of how bounty models at scale can erode freedoms, raise costs, and threaten individual safety:

  • The freedom of speech and enterprise can be eroded through DDoS marketplaces, which we have covered at length.
  • Short-term security is eroded through bounties on password attacks and general security. And while, like hacking, attacks on security keep a healthy pressure on the strength of security, this of course comes at an economic price. Unlike in our world of analog hackers, anonymous nodes are not likely to expose vulnerabilities quietly and accept private bounties.
  • Personal security can be eroded through economic bounties on lives of people, also known as assassination markets. I can’t think of an obvious solution to this kind of threat at scale, except to hope that most people are fundamentally benevolent. Can you?

There is an argument to be made that these kinds of threats are merely transient states in a self-correcting system. If the economic incentive to steal shiny cryptocoins generally erodes security, then security will improve and the problem will go away. If businesses are being censored by DDoS attacks, at some point it becomes more economically viable to prevent the attacks; centralized websites will tend toward decentralization, since it is significantly harder to DDoS and censor data in a highly redundant system — like IPFS. Will people, by analogy, learn to become so personable as not to end up on the shortlist of an assassination market?

In the long term, it will be prudent to settle on an equilibrium in which most users make ethically efficient use of decentralized systems, or at least one whose use we are ethically comfortable with. For now, we should be closely monitoring and reasoning about the short-term damage that will arise from the sudden availability of large decentralized networks.

What measures of protection are available to potential victims, end users, and innocent bystanders? How do we define, predict, and monitor emergent behaviors of globalized systems? What measures in the protocol layer or the application layer are available that, while respecting privacy and abstaining from censorship, protect legitimate and acceptable uses of decentralized networks?

These are questions we need to be asking while we build extremely powerful, scalable, and future-oriented software.

I upvoted you because it appears your account name matches the original content posted here:
https://blog.coinfund.io/on-ethics-in-decentralized-systems-213ad705b462#.brvx792fq

We have an ethical issue on Steem where people plagiarize content of others to get rewards. If you want to avoid getting flagged and downvoted it is critical to give proper attribution or at the very least link to other places you have posted the full content and indicate that you are the author.

Hey Dan, thanks. I am in fact the original author. I will make the changes!

Oh! Happy to see you here, Jake! Can you see the potential of this thing, looking at your balance for this post?

I am not clear what these "$" denominations mean, what they're actually worth, or how they transfer into user accounts. From a psychological perspective for users of the platform, I think they are compelling.

surely it could be possible to develop a component/app to search the web for duplicate content, determine if the author is the same, and penalize in the event of plagiarism, no?

I'm not a coder, so I wouldn't know how to execute the idea, though I imagine it'd surely be possible to have a competent team write an app that auto-executes any time someone posts, running a search to verify whether content was plagiarised or not. It might not be 100% automatable, though surely, if duplicate work was found, a notice could be sent to the poster's account with a request for verification that they are in fact the author, requiring action to verify. That could then update the Steemit system database with their accounts on other sites where they might be posting duplicate content, to prevent further alarms...

I think it is interesting that you use DDoS as an example. It is the moral equivalent of an assassination market. Perhaps inadvertently, in your argument you state the following:

And such is the trouble of emergent behaviors. Just like the event’s funding, the responsibility for the event is shared among a large group of which no member is directly responsible.

Isn't this exactly what happens when people vote to pass laws? No individual is responsible for the innocent people killed on death row, or the millions of people killed in wars started by the people they voted for. No one is responsible for the taxes stolen from the population.

You see, the market is decentralized and these emergent technologies are actually countermeasures against other emergent ills created by government. So the question of which ill is worse: prohibition or selling illegal drugs? Someone killed by political processes or someone killed by an assassination market?

Perhaps the resultant assassination markets will be the markets solution to corrupt government officials? Perhaps they will save more life on the balance than the corrupt systems we have today.

So when you think of the ills these systems create, ask yourself why we need them in the first place. What ills have motivated the free market to produce this technology?

The solution to combat the ills is to legalize and deregulate activities currently outlawed. This would allow more efficient centralized solutions to emerge that can outcompete the decentralized systems that enable the negative outcomes described here.

Thanks for the response! Some comments:

Isn't this exactly what happens when people vote to pass laws? No individual is responsible for the innocent people killed on death row, or the millions of people killed in wars started by the people they voted for. No one is responsible for the taxes stolen from the population.

In some sense, yes, it is the same. However, in decentralized systems, there may also be an element of inadvertent outcomes. In lawmaking, the distribution of responsibility is intentional. The system may work or it may not, and no one will be individually blamed (in theory). In decentralized systems, where some emergent behavior may come about by accident, the distribution of responsibility is a bug. Recourse for big disasters at scale is difficult, if it exists at all.

emergent technologies are actually countermeasures against other emergent ills created by government

Check out Alex Bulkin's article on this (https://blog.coinfund.io/elephant-in-the-room-ethical-blockchains-and-the-conundrum-of-governance-a11d0f9c4c56#.h42j3147z). To him, the conundrum of governance is that "a system designed to counteract power imbalances can be used to create them."

Perhaps the resultant assassination markets will be the markets solution to corrupt government officials? Perhaps they will save more life on the balance than the corrupt systems we have today.

Perhaps. But are we ready for such a system? If you look at Colombia, their government works in this way today: if a politician goes against the cartels, he is assassinated. Is this a good system? And that's exactly the point: without studying, understanding, and counteracting the issues that come with large-scale decentralization (i.e. solving all the Sybil attack vectors, understanding reputation and identity, and counteracting the oligopolistic tendencies that have so far arisen in every "decentralized system" from Bitcoin to the DAO), we probably should not place a bet measured in human lives on this system working.

So when you think of the ills these systems create, ask yourself why we need them in the first place. What ills have motivated the free market to produce this technology?

There is no question in my mind that this technology is a response to a system that has stopped working in our modern technological context. I am inviting everyone to discuss the best way to transition into this system, which should be slowly and in a measured way that takes all of the side effects into account.

Some initial comments based on my current understanding.

  1. Ethical and legal are not the same and are not even always correlated.
  2. Ethical and moral are not the same, but most people give them the same connotation. Ethical is usually based on some philosophical system involving logic, reason, and rationality. Moral can be based on religion, spirituality, or just anything people are raised with. It's a challenge to be ethical, but to be moral you sometimes must only do as you are told.
  3. Among popular varieties there are consequentialist ethics and deontological ethics. You might be a consequentialist if you successfully answered the trolley problem. There are also virtue ethics. We can say that people who adhere to consequentialist ethics are most concerned with outcomes rather than with whether something feels right or wrong. People who follow deontological ethics follow strict rules and in most cases intend to follow deontic logic. Divine command theory is one example of a source of deontological ethics, but it is not the only one. In general, "thou shalt not" lists are deontology. Virtue ethics puts the focus on the individual's character and personal values.

We don't all follow the same ethics. There is no universal ethics. In addition, there are other social forces, such as laws, social norms, and folkways, all of which influence behavior without any regard to personal ethics or what the individual might perceive internally as right and wrong.

Social norms are rules of behavior you are expected to follow, imposed on you by a community. These are the unwritten, unspoken laws of a community, and while they are not written on paper, they get enforced through bullying and vigilantism. In effect, social norms are as powerful as, or even more powerful than, the written laws, and these social norms are at play in decentralized networks, which means human behavior in decentralized networks is constrained by reputation.

Laws are basically norms, but the difference is that laws are enforced by professional law enforcement, federal police, the official government powers. The distinction between a law and a norm is who enforces it and how it's enforced. Violating a social norm could mean being socially shunned, exiled, ostracized, or given unfair treatment, while violating the law could mean being put in jail. It's up to an individual to determine which is worse, but in my opinion both are negative outcomes.

A conclusion I've come to is that the ethics most people hold act like a navigation system, allowing them to interact and get along with other people with a minimum of friction. It allows people to formally engage, it enables etiquette, and because not everyone follows the same ethics, it means learning to accommodate, within reason, people who follow different systems.

Decentralized technology will only improve ethics if the ability to interact ethically is a priority to the developers of decentralized technology. In my opinion ethics should not be imposed by the developers onto the users but instead the users should be allowed to select their tribe, their ethics, their comfort zone, and announce it to the world on the blockchain with a badge of honor or honorific title, so people know exactly how to deal with them.

An example could be Dan who clearly has anarcho-capitalist leanings, who clearly cares a lot about ethics, who speaks English, who cares about liberty, who supported Ron Paul, you can pick up an ethical profile of him using decentralized technology but also allow him to be pseudo-anonymous when he wants to. So reputation and ethics are represented as tags and ratings on a blockchain based on the historical evidence which the blockchain can continuously collect.

I've spent years thinking about these sorts of questions and have written on the subject. It is my opinion that the only way to improve ethical interaction in decentralized systems, or in the world in general, is to use machine learning and artificial intelligence to augment and amplify ethical decision making. An individual can be more ethical with the help of an intelligent agent, which they can query as a means of transcending their own ignorance. The paper I wrote is "Cyborgization: A Possible Solution to Errors in Human Decision Making?", which frames the problem as one of human decision making. Morality or ethics would be a decision problem, which means you can design a moral or ethical search engine where an intelligent agent recommends to the individual the best decision for them according to their ethics, while taking legal and social risks into consideration. There are too many laws, too many social norms, and too many customs for any rational, consequence-based human individual to process without help, which is the point of my paper. Rational choices require knowledge, and knowledge requires an ability to process information; if there is too much data, too much information, and no reasonable amount of time for any human to process it all, then you get common exploits that take advantage of people's ignorance of the rules, and of information asymmetry.

  1. https://en.wikipedia.org/wiki/Consequentialism
  2. https://en.wikipedia.org/wiki/Deontological_ethics
  3. https://en.wikipedia.org/wiki/Deontic_logic
  4. http://www.philosophybasics.com/branch_virtue_ethics.html
  5. http://sociology.about.com/od/Deviance/a/Folkways-Mores-Taboos-And-Laws.htm
  6. https://transpolitica.org/2015/07/07/cyborgization-a-possible-solution-to-errors-in-human-decision-making/
  7. https://en.wikipedia.org/wiki/Trolley_problem

Some serious thinking here! I am compelled to upvote you with my tiny powers.

Hi dana-edwards, thank you for the explanations and I appreciate your depth of knowledge on these issues. Admittedly, I am not an expert on ethics, and so maybe I could have titled the article differently. My point is simply this: as we unleash large-scale unstoppable systems onto the world, there will be unforeseen consequences. It would really serve the community well to think about them ahead of time. To be clear, I am not advocating for a litigious or oversight approach (necessarily), I am simply echoing Alex in a call for discussion.

Jbrukh, my argument is basically that the technology is already unleashed. It's already being used for its worst purposes. Drones are used to kill people in war. Various countries have cyber militias willing to unleash advanced persistent threats. All of us are potential victims of espionage. So when it comes to cyberspace, the attackers already have the advantage and always have had it, and the advanced persistent threat is the kind of attacker to be most concerned about, because they aren't doing it for the money; they may be state funded, or doing it for a cause, such as to help one side of a war.

What we can do is help encourage the use of certain technologies which at the time don't get put to beneficial use by regular people. Intelligent agents are not a new technology and have existed theoretically for a long time. Blockchain technology is new but any bad guy could unleash entirely new weaponized blockchains funded by or sponsored by their state.

That being said it is true that when you empower regular people you risk that some percentage will misuse the power. It is ultimately only a situation which can be solved by empowering the people who care about security in cyberspace but to have security does not in my opinion require crippling cyberspace or diminishing liberty. I believe you can have both security and liberty in cyberspace if you get the design right.

As far as intelligent agents go, many lives will be saved by them. The entire economy may even be saved by intelligent agents, which may in fact be what powers the automated economy going into the future. It is important that any individual be able to farm these intelligent agents, and that any individual have access to AI and to automation. While you could say there is a risk that some agents will be amoral economic agents, it does not mean we have to design them that way.

The way I see it, your intelligent agents are the digital you. It's an expression of your will and intent, and it will act on your behalf exactly as you would want. If you're an amoral person then perhaps you would want an amoral agent but most people looking at the consequences can quickly figure out that while that approach might mean more money in the short term it is very costly long term.

People make the mistake sometimes of believing all wealth can be measured in net worth or in money. People make the mistake of believing that having money is equivalent to having power. Neither is true. To have wealth you must have resources, and specifically you must have assets. Your reputation in a community is an asset, your human capital is an asset, and these assets are determined by how other people think of you at any given time. If a person is truly trying to be wealthy in a world of intelligent moral agents, then they would probably be wealthiest if their intelligent agents are the most moral as well as the most economically efficient.

I may have some papers on this subject which I can post here to continue the discussion. It's an important discussion, and the topic of intelligent agents is very important for future morality and ethics. Personally, I think intelligent agents will be able to improve ethics dramatically, because most human beings aren't particularly good at thinking about ethics as it pertains to hundreds, thousands, or millions of people; they can't get past Dunbar's number and other biological limits, which restrict people to thinking only about the people they personally know. The intelligent agents will be able to draw on knowledge of millions of people, possibly intimate big-data-type knowledge; big ethics will result from big data.

  1. Baylor, A. (2000). Beyond butlers: Intelligent agents as mentors. Journal of Educational Computing Research, 22(4), 373-382.
  2. Campbell, A., Collier, R., Dragone, M., Görgü, L., Holz, T., O’Grady, M. J., ... & Stafford, J. (2012). Facilitating ubiquitous interaction using intelligent agents. In Human-computer interaction: the agency perspective (pp. 303-326). Springer Berlin Heidelberg. http://link.springer.com/chapter/10.1007/978-3-642-25691-2_13#page-1

Now to address some concerns you specifically put forth in your post.

By nodes I assume you mean either economic agents or intelligent agents? Smart contracts for example can be intelligent agents, bots can be intelligent agents, and you can use a botnet to censor or suppress information so it's not a stretch to believe that intelligent agents can be used to promote a global moral regime which most of us disagree with.

One way to combat the nasty intelligent agents which we think could exist is to start building the friendly (friendly to our interests) intelligent agents today. First if you're familiar with extended mind theory then you'll understand that these intelligent agents will become a part of our minds. Our intentions will be encoded into them and our minds uploaded onto intelligent agents. For these reasons I would say this is about helping ethical people take control of their own mind and helping people to identify other people who share their ethics.

There is an argument to be made that these kinds of threats are merely transient states in a self-correcting system. If the economic incentive to steal shiny cryptocoins generally erodes security, then security will improve and the problem will go away. If businesses are being censored by DDoS attacks, at some point it becomes more economically viable to prevent the attacks; centralized websites will tend toward decentralization, since it is significantly harder to DDoS and censor data in a highly redundant system — like IPFS. Will people, by analogy, learn to become so personable as not to end up on the shortlist of an assassination market?

Assassination markets, decentralized ransom networks, it's all potentially going to exist. At the same time these sorts of networks already exist around the world but now with technologically improved communications. Not everyone agrees with warfare or wants warfare. Assassination markets would lead to warfare, and then there would be organized efforts to put an end to it. At this point in time, it is up to the early adopters and innovators to take on the responsibility of building protection mechanisms for future generations who might inherit a more decentralized world. For sure there are risks of terrorism, of war, but at the same time there is potential to build for peace.

Bitnation is an example of one of the first attempts at building a virtual nation. The virtual nation concept was birthed in many different places but I have independently contributed to one concept while Susanne independently came up with Bitnation. In both cases the idea is that as technology moves forward the concern of borders is going to diminish, it goes from geographical to digital. Virtual nations, tribes, communities, whatever you want to call the groups, in the end it is how people think which matters. People who think in ways compatible with how you think will be people you'll be more willing to work with, and by allowing people to voluntarily join virtual nations, or cyber tribes, or decentralized autonomous communities, you create order in cyberspace according to the rules and norms selected by individuals.

There will certainly be communities and groups who have agendas we don't like, who do things we don't like, and there will be individuals who have bad histories, and all of this will be flagged by the blockchain as well as by traditional law enforcement. The point here is to build the software necessary so like minded people can find each other and organize to solve problems.

Smart contracts, dispute resolution, reputation systems, sites like this, or concepts like what is being worked on by team Bitnation, are all about solving problems. The fact that we have these discussions means that at least some of us are giving it thought. One thing many of us early thinkers on the subject agree on is that a polystate or polycentric model is necessary for cyberspace. One order to rule them all, from the top down, is not likely going to work with the social norms we have in place and the attitudes of the people currently building the technology.

  1. https://www.amazon.com/Polystate-Thought-Experiment-Distributed-Government-ebook/dp/B00IM5EM7W

Hi Dana, some comments:

By nodes I assume you mean either economic agents or intelligent agents?

Yes, agents in the abstract. They could be autonomous software or users acting on the network. The relevant properties of these agents are generally the same: they will amorally fulfill economic incentives.

it's not a stretch to believe that intelligent agents can be used to promote a global moral regime which most of us disagree with

I totally agree here. Just as people write software to exploit economic advantages found on a cryptocurrency-enabled network, users can also write benevolent software to counteract negative effects. For instance, if the Disney website is being DDoSed due to an ill-formulated prediction market, watchdog nodes could potentially reduce the economic incentives to do harm. It's an interesting problem with many facets, not the least of which is: it takes capital to undo harm, so where does this capital come from?
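
One way to see the watchdog idea is as simple bounty economics: under the assumption that nodes participate only while each proven action nets a profit, a defender can price an attack out of the market by raising its marginal cost. The functions below are a hypothetical back-of-the-envelope model, not anything implemented:

```python
# Hypothetical back-of-the-envelope model of watchdog counter-incentives:
# an attack stays funded only while expected node profit is positive, so
# a defender who raises the marginal cost of a successful action can
# price the attack out of the market.

def attack_profitable(reward_per_request: float,
                      cost_per_request: float) -> bool:
    # Nodes participate only when each proven request nets a profit.
    return reward_per_request > cost_per_request

def mitigation_needed(reward_per_request: float,
                      base_cost: float) -> float:
    # Minimum extra per-request cost a watchdog must impose (e.g. by
    # funding filtering capacity) to make participation unprofitable.
    return max(0.0, reward_per_request - base_cost)
```

The open question in the text stands, though: imposing that extra cost takes capital, and the model says nothing about who supplies it.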

you create order in cyberspace according to the rules and norms selected by individuals.

Indeed! And for full disclosure, I am a citizen of Bitnation. I definitely think that consent-based citizenship in digital nations, DAOs, and other decentralized systems that permit membership is the way to go. To extend the national analogy, how does the system deal with large-scale decentralized behaviors between nations, which may not always align?

Smart contracts, dispute resolution, reputation systems, sites like this, or concepts like what is being worked on by team Bitnation, are all about solving problems.

Would love to learn more about the thinking here!