You are viewing a single comment's thread from:

RE: Solving Mesh Routing Given Bad Actors

in #meshnetwork • 6 years ago

Check out a quote from this article:

what does a scalable solution actually mean? Does this mean that the solution is scalable in the number of users or in the number of transactions, or in the size of the network? If a P2P network is capable of processing thousands of transactions, can we call the solution scalable? If so, what happens when the network doubles its size — can the throughput be maintained? In fact, a solution that is scalable in a single dimension may not be well-suited for a use case that requires scaling in a different dimension. Hashgraph currently scales only in the number of transactions processed but does not scale with the number of nodes in the network. Zilliqa for instance scales with the number of nodes in the network.

I find the Hashgraph guys to be selling it very hard and understating the downsides, almost defensive about them. This is a good example of some of the confusion. So the answer is: it depends on the throughput, and that depends on the use case. If there aren't that many packages (messages) but there are tons of people (nodes), then it might be applicable.


"Hashgraph currently scales only in the number of transactions processed but does not scale with the number of nodes in the network."

From the article you quoted.

" If there aren't that many packages (messages) but there are tons of people (nodes) then it might be applicable."

From your reply.

Unless I am even more confused and stupid than I think I am, I find those two statements contradictory. That being said, I was unaware of the scalability issue with Hashgraph, and don't understand it. I'm not that surprised really, as I am not a coder, and don't expect to be able to follow deep into the nuts and bolts.

If Hashgraph doesn't scale node-wise, then I'm saddened, but glad you pointed it out. Checking out Zilliqa now, in the hope that scalability in all three dimensions either turns up, or can be cobbled together soon.

Thanks!

I see what you mean, good catch.

In the context of the article, the author is talking about treating the dimensions of use independently: number of messages, payload size (which together combine into throughput), and number of nodes. I thought that for the same number of nodes, Hashgraph can scale in throughput, and for the same amount of throughput, it can scale in the number of nodes (that is, holding each variable fixed and scaling the other), but perhaps my interpretation was incorrect. Thanks for that; now I'm not sure.
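
To make that distinction concrete, here's a minimal sketch in Python of how I'm reading the dimensions. The numbers and profile names are completely made up for illustration; they aren't measurements of Hashgraph, Zilliqa, or any real network. The point is just that message rate and payload size combine into throughput, while node count is a separate axis a use case may or may not grow along.

```python
# A minimal sketch of the "scaling dimensions" idea. All numbers are
# hypothetical and invented for illustration; they are not measurements
# of Hashgraph, Zilliqa, or any real network.

def throughput_bytes_per_sec(messages_per_sec: float, payload_bytes: float) -> float:
    """Message rate and payload size combine into raw throughput."""
    return messages_per_sec * payload_bytes

# Two imaginary use cases: hold one dimension roughly fixed, grow the other.
profiles = {
    # Mesh messaging: huge node count, modest message rate.
    "mesh_messaging": {"nodes": 100_000, "messages_per_sec": 50, "payload_bytes": 512},
    # Payments cluster: small node count, high message rate.
    "payments": {"nodes": 200, "messages_per_sec": 10_000, "payload_bytes": 256},
}

for name, p in profiles.items():
    tput = throughput_bytes_per_sec(p["messages_per_sec"], p["payload_bytes"])
    print(f"{name}: {p['nodes']} nodes, {tput / 1024:.1f} KiB/s total throughput")

# A design that scales along one axis (more throughput at a fixed node count)
# can still be a poor fit if the use case grows along the other axis
# (more nodes at roughly constant throughput), and vice versa.
```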

I'll need to do more reading actually because there's not quite enough here. I'll get back to you on that.