Experimental Proof that the quality of your posts does matter (some)

in #steemit • 8 years ago (edited)

One of the more common themes on this platform is the complaint about how bad posts get up-voted and good posts do not. A quick search for “popularity contest” yielded 167 results. “Voting unfair” returned 496 results, including this one by @jako, which paid out 2 cents, and which has a video embedded that you should definitely watch before progressing any further in this post (Hums the Jeopardy theme). 

Yeah. Primates are obsessed with fairness.    

Not just humans -- all the social primates. Key word there being social, meaning that the same reward changed its value in the face of social information. Monkey 1 was perfectly happy with the cucumber until it saw Monkey 2 getting a grape. And undoubtedly, if Monkey 1 could talk, it would tell you, straight up, that the cucumber/grape payout situation was unfair and that it was angry.

Most social psychology studies are working at a more unconscious level, looking at how we make decisions, regardless of how we might later justify those decisions. This includes things like the famous Implicit Association Test, which measures your biases for or against classes of people by how quickly you can associate positive or negative words with those people.   

The paper I’ll actually talk about today comes from a series of three from Matthew Salganik’s Music Lab at Princeton, looking at how songs become popular. The first paper -- Salganik, Dodds, and Watts (2006), in Science -- was the really groundbreaking one, in my opinion. The others flesh out the method in interesting ways, which we can discuss further if readers want to, and the third one offers some good summary and context. Today I’ll just summarize the first one as it relates to Steemit. Pop it open in a new browser window so you can see the graphs. Got it? OK.

What was so great about SDW 2006?  

They built an “artificial market,” where people could download free music by unknown artists, and then rate that music from 1-5 stars (screenshots in the Supplemental Materials, Figures s2-s5, which wouldn't fit into the super-short Science format). It’s a lot like Steemit, where you can read posts by people you’ve never met, and decide on how much of your voting power to use when you up-vote them.   

Here’s the difference. The Princeton guys randomly divided their sample of 2100 people into ten groups (figure S1, page 2). Two of those groups looked at the songs in random order, with no information about what any other user thought. In other words, they couldn’t see the star ratings of any of the songs. They had to decide for themselves. The other eight groups got to see the star ratings, but only within their own groups. So it was like having eight parallel worlds where they could re-release the same song eight times to see how consistently it performed. All of this led up to a second decision, which was: Do you want to download this song?    

(In these kinds of studies it’s a good idea to include both self-report and behavioral data because self-report data can be contaminated by politeness or attempting to game the system for one’s own benefit. That wasn’t the case here, as the songs were the only reward.)   

You’d expect an objectively great song to get high ratings in all ten groups, and an objectively terrible song to get low ratings in all ten groups. And that did happen, more or less. However, in the middle range, opinions became more variable even in the independent, non-social condition. But they went crazy in the social worlds. A mediocre song could be highly popular in one and a complete failure in another. Being able to see what other people are doing creates positive feedback loops -- called amplifiers in electronics -- and tiny differences in quality, or just in a song's random placement in the list, could create major differences in the outcome. (In chaos theory this is called sensitive dependence on initial conditions.)
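If you want to poke at that idea yourself, here is a bare-bones Python sketch. It is not the Music Lab's actual code, and every number in it is invented except the 48 songs; one world ignores social information, and eight worlds let the running download count feed back into the next listener's choice, which is all a positive feedback loop needs.

```python
import random

random.seed(0)

NUM_SONGS = 48          # SDW 2006 used 48 songs; everything else here is made up
NUM_LISTENERS = 2000    # hypothetical listeners per world
QUALITY = [random.uniform(0.1, 1.0) for _ in range(NUM_SONGS)]  # hidden "true" quality

def run_world(social: bool) -> list[int]:
    """Simulate one world: each listener downloads one song.

    Independent world: the choice depends only on quality.
    Social world: the choice is weighted by quality times (1 + downloads so far),
    a simple cumulative-advantage rule standing in for visible star ratings.
    """
    downloads = [0] * NUM_SONGS
    for _ in range(NUM_LISTENERS):
        if social:
            weights = [q * (1 + d) for q, d in zip(QUALITY, downloads)]
        else:
            weights = QUALITY
        choice = random.choices(range(NUM_SONGS), weights=weights)[0]
        downloads[choice] += 1
    return downloads

independent = run_world(social=False)
social_worlds = [run_world(social=True) for _ in range(8)]

# Follow one middling-quality song across the eight social worlds.
mid_song = sorted(range(NUM_SONGS), key=lambda i: QUALITY[i])[NUM_SONGS // 2]
print("Mid-quality song, independent world:", independent[mid_song])
print("Same song, 8 social worlds:", [w[mid_song] for w in social_worlds])
```

Run it a few times with different seeds: the independent count wobbles a little with sampling noise, while the social counts spread out far more from world to world.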

It gets worse. 

From the paper:   

“Our experiment is clearly unlike real cultural markets in a number of respects. For example, we expect that social influence in the real world—where marketing, product placement, critical acclaim, and media attention all play important roles—is far stronger than in our experiment.”

In other words, in the real world there is not just one amplifier; there is a series of at least four, each one increasing the positive feedback of all the ones that came before. On Steemit, the visible vote count a post receives is only the first amplifier. The size of the voting accounts acts as a second, much more important one. The curation reward acts as a third, since once a post has a high payout, and you can see it, why not pile on to get a piece of it? The speed-up in voting offered by automated bots serves as a fourth level of amplification.
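Just to show how quickly stacked amplifiers compound, here is a toy calculation. This is emphatically not Steem's actual reward algorithm; the function, the weights, and the numbers are all invented for illustration.

```python
# Toy illustration (not Steem's real reward code): each "amplifier"
# multiplies the chance that the next reader votes, so small early
# differences compound as the amplifiers stack.

def vote_probability(base: float,
                     visible_votes: int,
                     voter_stake: float,
                     pending_payout: float,
                     bot_factor: float) -> float:
    """Hypothetical compounding of the four amplifiers named above."""
    p = base
    p *= 1 + 0.05 * visible_votes     # amplifier 1: herd on the visible vote count
    p *= 1 + 0.50 * voter_stake       # amplifier 2: big accounts move the needle
    p *= 1 + 0.10 * pending_payout    # amplifier 3: curation pile-on
    p *= bot_factor                   # amplifier 4: bots vote early and fast
    return min(p, 1.0)

# Two posts of identical base quality; the second got a lucky early whale vote.
print(vote_probability(0.02, visible_votes=3,  voter_stake=0.1, pending_payout=0.5, bot_factor=1.0))
print(vote_probability(0.02, visible_votes=30, voter_stake=2.0, pending_payout=20,  bot_factor=3.0))
```

Same base quality, but with the amplifiers stacked the second post comes out roughly 35 times more likely to pick up the next vote -- in this made-up model, anyway.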

The value of this paper is twofold. One, it demonstrates that a social system that allows for copycat behavior is going to generate highly unequal results -- structurally, not as a result of any conscious conspiracy.  Two, it shows experimentally how to make a fair system, by withholding social data.    

That’s no fun, you say. What good is a non-social social network?   

Well, maybe one day out of ten you're assigned to the non-social condition.  That might work.  

No?

OK, the other way to make a fair system is to run the system in parallel across multiple randomly seeded sub-Steemits, and to look for consistency in ranking posts across "worlds." I’m no programmer or statistician, but the paper and its supplemental materials seem to offer enough detail for someone with those skills to take a decent crack at it.    
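For the curious, here is roughly what that consistency check could look like. The payout numbers below are invented, and the paper quantifies agreement with its own unpredictability measure rather than this exact statistic; a rank correlation across pairs of worlds is just the simplest thing I could think of.

```python
# Sketch of the "parallel sub-Steemits" consistency check proposed above.
# Hypothetical data: final payouts of the same 10 posts in three
# independently seeded worlds. If the worlds agree on the ranking,
# the Spearman correlations will be close to 1.
from itertools import combinations
from scipy.stats import spearmanr

payouts_by_world = {
    "world_A": [9.1, 0.4, 2.2, 0.1, 5.0, 0.3, 1.1, 0.2, 7.5, 0.6],
    "world_B": [1.3, 0.2, 8.4, 0.1, 6.2, 0.5, 0.9, 0.3, 2.1, 4.4],
    "world_C": [7.8, 0.3, 3.0, 0.2, 4.9, 0.4, 1.5, 0.1, 6.6, 0.7],
}

for (name_a, a), (name_b, b) in combinations(payouts_by_world.items(), 2):
    rho, _ = spearmanr(a, b)
    print(f"{name_a} vs {name_b}: Spearman rho = {rho:.2f}")
```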


What do you all think?

Follow me @plotbot2015.

Randall Hayes, your friendly neighborhood neuroscientist, also writes the PlotBot column for Orson Scott Card's Intergalactic Medicine Show.


Not sure how I feel about this, but others seem to like it, so you get my vote.

OK, you get assigned to the no-social-info group.

Excellent write-up! Gave you the whole package: view, read, vote + follow :) I am happy it got numerous votes (that was very 'fair'! :)) so I could find it in the higher section of #steemit.
I loved the monkey part!!! Aren't we all monkeys when it comes to comparison in the end? :) I am quite aware of these discussions here and have been asked several times by new users what they should do to improve their rankings. Well, there are so many factors influencing the success of a post that it's quite hard to give any serious recommendations. What is a bad post and what is a good post? It's much about taste in the end.
If there might be one 'rule', it's that posts which provide some considerable value to the community usually receive relatively good rewards. Value can be a lot of different things here, but mostly knowledge + entertainment.
I don't think we should force 'fairness' here, because when it comes to ranking content steemit is like real life. Your voice is worth something, but the community decides how much. I think people should not get angry if they are not heard in the beginning. They could shout out louder, use another tone, make friends who might have a better voice than they have, be creative. In the end it's all about gaining attention, and we all have the same possibilities.
Steem on!

I haven't seen that "view" icon before. What does that do?

It counts user views :-)

So the majority of my up-votes came from people who haven't read the post?
115 votes - 37 views = 78 unread votes?
Seriously?

Yip. Ask the bot nannies and trail brokers :)
PS: I am not a robot, haha

That sucks. I am not writing for bots, but for people.

This is one of the most interesting posts I have read in a long time on steemit! I hope you will write more about these mechanisms, and how they apply to steemit. I would love to see a system without the social information. I know others have discussed similar ideas.
Steem on, you are upvoted and followed.

They did two more papers using the same system, and I can certainly go into more detail about those in future posts if people want.