Nester

How do you get an online community to behave fairly while its members compete against each other?

Roles
User Experience, UI, Concept Development, Frontend Development
Year
January-May 2016
Harvard University
Tags
Interaction Design
Service Design

The project presented on this page is the output of the Data-Shack Program I took part in at Harvard University in 2016, later developed further and published as my Master's degree thesis under the title "Competitive Crowdsourcing". Nester, the name of the project, is a design competition platform based on crowdsourcing. In short, its aim is to bypass the need for a dedicated jury to evaluate the projects by letting the community choose the winner through the participants' votes.

Logo and Variations

nester logo

The development of the identity and the UI was also part of the project. The name "Nester" is a metaphor for a safe space where projects and ideas can grow.

nester mockup

Typography

DIN Pro

Style Guide (scaled)

style guide

Why Crowdsourcing?

This application of crowdsourcing to design contests started from the question of how to improve online competitions, an area often ignored by design professionals because of its lack of appeal. Crowdsourcing the jury is therefore an exploration of how to create a better experience that doesn't end with the conclusion of the competition, but offers the user the possibility of a long-term investment and reward.

How do you keep people from cheating?

To build that long-term investment and to achieve a fair voting process, the rating and reputation system becomes a crucial part of the platform. A reputation system can be defined as a measure of how valuable the content provided by a single user is to the community. It can take the form of a finely tuned score (as in the Stack Exchange system) or a simpler mechanism such as the number of followers on a social network. In both cases the value reflects how trustworthy the user's content is, and it is built by investing time in the community.

nester-competition

Reputation System

In Nester it works in a similar way: when a user rates a proposal, they obtain a certain amount of Reputation Points (RP) based on how closely their rating aligns with the project's average vote. In practical terms, if a user assigns a low vote (say 1 star out of 5) to a project rated 4.4/5, it means either that the user is trying to sabotage the competition or that they simply couldn't recognise the actual value of the project. They are therefore awarded few Reputation Points or, when the gap between their vote and the average is as large as in this example, even a negative amount. At the same time, this prevents people from giving 5/5 stars to every project just to see their RP increase.

reputation system

Visual representation of the algorithm behind the reputation system. It shows the two processes through which a user can improve their reputation score.

nester-signup
nester-discovery
RP formula
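
As a concrete illustration of the behaviour described above, here is a minimal sketch of an RP update. The actual formula is the one shown in the figure; the constants below (MAX_GAIN, PENALTY_SLOPE) are hypothetical and only reproduce the idea that votes close to the project's average earn points while distant votes lose them.

```typescript
// Illustrative sketch only: the real RP formula is the one in the figure above;
// these constants are hypothetical.
const MAX_GAIN = 10;      // RP earned for a vote that matches the average exactly
const PENALTY_SLOPE = 5;  // how fast RP drops as the vote diverges from the average

/**
 * Reputation Points awarded for a single vote, based on how far it is
 * from the project's current average rating (both on a 1-5 star scale).
 */
function reputationDelta(userVote: number, projectAverage: number): number {
  const gap = Math.abs(userVote - projectAverage);     // 0 .. 4
  return Math.round(MAX_GAIN - PENALTY_SLOPE * gap);   // positive near the average, negative far away
}

// Example from the text: a 1-star vote on a project rated 4.4/5.
console.log(reputationDelta(1, 4.4)); // -7 -> negative RP
console.log(reputationDelta(5, 4.4)); //  7 -> positive RP
```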

A higher reputation score brings benefits to my profile: the more RP I have, the more often my projects are displayed on the Discover Page (where people vote on projects). This doesn't mean I will automatically win, but it guarantees greater visibility and therefore more votes. In this way designers are encouraged to vote correctly, that is, not to assign high votes just to earn a reward, but to evaluate each project properly.
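
The page doesn't specify how this visibility boost is computed; one plausible reading is RP-weighted random sampling, sketched below. The BASE_WEIGHT floor is an assumption so that new users with 0 RP still appear occasionally.

```typescript
interface Project {
  id: string;
  authorReputation: number; // the author's current RP
}

/**
 * Picks one project to show on the Discover Page, with a probability
 * proportional to the author's Reputation Points.
 */
function pickProjectToDisplay(projects: Project[]): Project {
  const BASE_WEIGHT = 1; // hypothetical floor so low-RP projects are not invisible
  const weights = projects.map(p => BASE_WEIGHT + Math.max(0, p.authorReputation));
  const total = weights.reduce((sum, w) => sum + w, 0);

  let threshold = Math.random() * total;
  for (let i = 0; i < projects.length; i++) {
    threshold -= weights[i];
    if (threshold <= 0) return projects[i];
  }
  return projects[projects.length - 1]; // fallback for floating-point edge cases
}
```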

Tests

The platform was evaluated with both a usability test and a validation test. The latter was especially important here, since it had to determine whether the Reputation Score actually works. An A/B test was therefore built with Firebase, consisting of two different web pages: one introducing the RP and one without it.
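
A minimal sketch of how each vote could be logged against the variant being tested, using today's modular Firebase SDK (the 2016 prototype used the older API; the config values, database path, and field names here are placeholders, not the original code):

```typescript
import { initializeApp } from "firebase/app";
import { getDatabase, ref, push } from "firebase/database";

// Placeholder config: the real keys belong to the project's Firebase account.
const app = initializeApp({ databaseURL: "https://<project-id>.firebaseio.com" });
const db = getDatabase(app);

type Variant = "A_hidden_rp" | "B_visible_rp";

/** Stores a single vote together with the A/B variant the tester saw. */
function logVote(variant: Variant, projectId: string, stars: number) {
  return push(ref(db, `abtest/${variant}/votes`), {
    projectId,
    stars,                 // 1-5 rating given by the tester
    timestamp: Date.now(),
  });
}

// Example: a tester on the RP-visible page rates a project 4/5.
logVote("B_visible_rp", "project-42", 4);
```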

tools

A. Hidden RP

test_1
test_2

B. Visible RP

The results were particularly disappointing at first. During the test I tried not to influence the users' voting by leaving them alone, hoping they wouldn't feel "observed". Nevertheless, the A/B test showed nearly identical results between the two platforms… not what I was hoping for after several months of development. However, I felt that even though I had tried to stay neutral, the natural competitiveness that would emerge in a real-life scenario had been compromised by the feeling of "being in a test". I therefore edited the code to collect data via IP addresses, creating a kind of profile for each user (without them being aware that their votes were tracked), and replicated the A/B test a second time, this time over the web instead of in person. This time the results were much more interesting: the same projects received, on average, lower votes (often by more than 2 points) in the version where the RP was hidden than in the version where it was visible to the users.
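
The page doesn't show how the IP-based grouping was implemented; as a sketch only, a small server-side handler (here with Express, which the original code may not have used) could hash the request IP and key each tester's votes under that hash:

```typescript
import express from "express";
import { createHash } from "crypto";

const app = express();
app.use(express.json());

// In-memory store for the test session: one pseudonymous profile per IP hash.
const profiles = new Map<string, { projectId: string; stars: number }[]>();

app.post("/vote", (req, res) => {
  // Hash the IP so votes are grouped per tester without storing the raw address.
  const userKey = createHash("sha256").update(req.ip ?? "unknown").digest("hex");

  const votes = profiles.get(userKey) ?? [];
  votes.push({ projectId: req.body.projectId, stars: req.body.stars });
  profiles.set(userKey, votes);

  res.json({ recorded: votes.length });
});

app.listen(3000);
```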

What does it mean?

Interpreting these data isn't easy. The desired competitive scenario is very hard to recreate without building the whole platform and launching it into the real world, with real competitions. So even though these data can't be taken as absolute truth, the system showed its potential and produced very significant results, in a situation where the users were still aware of being part of a test.

This page shows a project developed over nine months, first during the Data-Shack Program and subsequently as my Master's degree thesis. Some information and parts of the project have not been included because of the breadth of the research (UX archetypes, personas, etc.). If you have questions or anything is unclear, feel free to contact me.