Boomerang: Rebounding the consequences of reputation feedback on crowdsourcing platforms

Book chapter


Gaikwad, N.S., Morina, D., Ginzberg, A., Mullings, C., Goyal, S., Gamage, D., Diemert, C., Burton, M., Zhou, S., Whiting, M., Ziulkoski, K., Gilbee, A., Niranga, S. S., Sehgal, V., Lin, J., Kristianto, L., Richmond-Fuller, A., Regino, J., Chhibber, N., Majeti, D., Sharma, S., Mananova, K., Dhakal, D., Dai, W., Purynova, V., Sandeep, S., Chandrakanthan, V., Sarma, T., Matin, S., Nasser, A., Nistala, R., Stolzoff, A., Milland, K., Mathur, V., Vaish, R. and Bernstein, M. S. 2016. Boomerang: Rebounding the consequences of reputation feedback on crowdsourcing platforms. in: Rekimoto, J. and Igarashi, T. (ed.) UIST '16: Proceedings of the 29th Annual Symposium on User Interface Software and Technology New York Association for Computing Machinery. pp. 625-637
Authors: Gaikwad, N.S., Morina, D., Ginzberg, A., Mullings, C., Goyal, S., Gamage, D., Diemert, C., Burton, M., Zhou, S., Whiting, M., Ziulkoski, K., Gilbee, A., Niranga, S. S., Sehgal, V., Lin, J., Kristianto, L., Richmond-Fuller, A., Regino, J., Chhibber, N., Majeti, D., Sharma, S., Mananova, K., Dhakal, D., Dai, W., Purynova, V., Sandeep, S., Chandrakanthan, V., Sarma, T., Matin, S., Nasser, A., Nistala, R., Stolzoff, A., Milland, K., Mathur, V., Vaish, R. and Bernstein, M. S.
Editors: Rekimoto, J. and Igarashi, T.
Abstract

Paid crowdsourcing platforms suffer from low-quality work and unfair rejections, but paradoxically, most workers and requesters have high reputation scores. These inflated scores, which make high-quality work and workers difficult to find, stem from social pressure to avoid giving negative feedback. We introduce Boomerang, a reputation system for crowdsourcing that elicits more accurate feedback by rebounding the consequences of feedback directly back onto the person who gave it. With Boomerang, requesters find that their highly-rated workers gain earliest access to their future tasks, and workers find tasks from their highly-rated requesters at the top of their task feed. Field experiments verify that Boomerang causes both workers and requesters to provide feedback that is more closely aligned with their private opinions. Inspired by a game-theoretic notion of incentive-compatibility, Boomerang opens opportunities for interaction design to incentivize honest reporting over strategic dishonesty.
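The rebounding idea described in the abstract can be illustrated with a minimal sketch: a worker's ratings of requesters reorder that worker's own task feed, and a requester's ratings of workers determine how soon each worker can access the requester's future tasks. The class names, rating scale, and delay schedule below are hypothetical illustrations, not the paper's implementation.

```python
# Illustrative sketch of Boomerang's feedback "rebounding" (hypothetical details).
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Task:
    title: str
    requester: str

@dataclass
class Worker:
    name: str
    # Ratings this worker has given to requesters (1-5 stars).
    requester_ratings: Dict[str, float] = field(default_factory=dict)

def rank_task_feed(worker: Worker, tasks: List[Task]) -> List[Task]:
    """Tasks from requesters the worker rated highly float to the top,
    so an inflated rating rebounds on the worker as a worse feed."""
    return sorted(
        tasks,
        key=lambda t: worker.requester_ratings.get(t.requester, 3.0),
        reverse=True,
    )

def access_delay_hours(requester_rating_of_worker: float) -> float:
    """A requester's highly-rated workers see that requester's future tasks
    first; lower-rated workers wait longer (hypothetical schedule)."""
    return {5.0: 0.0, 4.0: 6.0, 3.0: 12.0}.get(requester_rating_of_worker, 24.0)

if __name__ == "__main__":
    w = Worker("alice", requester_ratings={"acme": 5.0, "widgetco": 2.0})
    feed = rank_task_feed(w, [Task("label images", "widgetco"),
                              Task("transcribe audio", "acme")])
    print([t.title for t in feed])   # the highly-rated requester's task ranks first
    print(access_delay_hours(4.0))   # a worker rated 4 stars waits 6 hours
```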

Keywords: Reputation systems; Crowdsourcing platforms; Human computation; Game theory
Page range: 625-637
Year: 2016
Book title: UIST '16: Proceedings of the 29th Annual Symposium on User Interface Software and Technology
Publisher: Association for Computing Machinery
Output status: Published
File Access Level: Open
Place of publication: New York
ISBN: 9781450341899
Publication dates
Online: 16 Oct 2016
Publication process dates
Deposited: 16 Nov 2020
Official URL: https://doi.org/10.1145/2984511.2984542
Related URLs: https://hci.stanford.edu/publications/2016/boomerang/boomerang-uist.pdf
https://acm-prod-cdn.literatumonline.com/2984511.2984542/f734fea8-0a93-4038-af71-58616e379467/uist2629-file4.mp4?b92b4ad1b4f274c70877518315abb28be831d54738a81f1de54388f7ef0eefe70a912b90dead24367f6dc4fb01df839b53f592f0801a213d68e8ae0bfa7b31ec099fd82d684e186b7f05ff6f251952903d9fa8fa38e7bbdd4dcaa65a7c68d24faa0732f849
https://www.youtube.com/watch?v=C6L7J2DUFz8
https://www.youtube.com/watch?v=8tX1XNq96pQ
http://st.sigchi.org/publications/toc/uist-2016.html
https://uist.acm.org/uist2016/
Funders: National Science Foundation
Stanford Cyber-Social Systems Grant
References

1. Facebook ad delivery and pacing algorithm.
http://bit.ly/20svn45, 2016.
2. Turk alert. http://www.turkalert.com/, 2016.
3. Adamic, L.A., Lauterbach, D., Teng, C., and Ackerman,
M.S. Rating friends without making enemies. In
Proceedings of the International Conference on Weblogs
and Social Media (ICWSM’11). 2011.
4. Akerlof, G.A. The market for lemons: Quality
uncertainty and the market mechanism. Quarterly
Journal of Economics, pp. 488–500, 1970.
5. Antin, J. and Shaw, A. Social desirability bias and
self-reports of motivation: a study of amazon mechanical
turk in the us and india. In Proceedings of the SIGCHI
Conference on Human Factors in Computing Systems, pp.
2925–2934. ACM, 2012.
6. Aroyo, L. and Welty, C. Crowd truth: Harnessing
disagreement in crowdsourcing a relation extraction gold
standard. International ACM Web Science Conference,
2013.
7. Bederson, B.B. and Quinn, A.J. Web workers unite!
addressing challenges of online laborers. In CHI’11
Extended Abstracts on Human Factors in Computing
Systems, pp. 97–106. ACM, 2011.
8. Bernstein, M.S., et al. Direct answers for search queries
in the long tail. In Proceedings of the SIGCHI Conference
on Human Factors in Computing Systems, pp. 237–246.
ACM, 2012.
9. Brawley, A.M. and Pury, C.L. Work experiences on
mturk: Job satisfaction, turnover, and information sharing.
Computers in Human Behavior, 54:531–546, 2016.
10. Buchanan, J.M. Externality. Economica,
29(116):371–384, 1962.
11. Callison-Burch, C. Crowd-workers: Aggregating
information across turkers to help them find higher
paying work. In Proceedings of the Second AAAI
Conference on Human Computation and Crowdsourcing,
pp. 8–9. AAAI, 2014.
12. Cheng, J., Teevan, J., and Bernstein, M.S. Measuring
crowdsourcing effort with error-time curves. In
Proceedings of the SIGCHI Conference on Human
Factors in Computing Systems, pp. 205–221. ACM, 2015.
13. Chilton, L.B., Horton, J.J., Miller, R.C., and Azenkot, S.
Task search in a human computation market. In
Proceedings of the ACM SIGKDD workshop on human
computation, pp. 1–9. ACM, 2010.
14. Christoforaki, M. and Ipeirotis, P. Step: A scalable testing
and evaluation platform. In Second AAAI Conference on
Human Computation and Crowdsourcing. 2014.
15. Edelman, B., Ostrovsky, M., and Schwarz, M. Internet
advertising and the generalized second price auction:
Selling billions of dollars worth of keywords. Tech. rep.,
National Bureau of Economic Research, 2005.
16. Gaikwad, S., et al. Daemo: A self-governed
crowdsourcing marketplace. In Proceedings of the 28th
Annual ACM Symposium on User Interface Software &
Technology, pp. 101–102. ACM, 2015.
17. Gupta, A., Thies, W., Cutrell, E., and Balakrishnan, R.
mclerk: enabling mobile crowdsourcing in developing
regions. In Proceedings of the SIGCHI Conference on
Human Factors in Computing Systems, pp. 1843–1852.
ACM, 2012.
18. Hancock, J.T., Toma, C., and Ellison, N. The truth about
lying in online dating profiles. In Proceedings of the
SIGCHI conference on Human factors in computing
systems, pp. 449–452. ACM, 2007.
19. Heimerl, K., et al. Communitysourcing: engaging local
crowds to perform expert work via physical kiosks. In
Proceedings of the SIGCHI Conference on Human
Factors in Computing Systems, pp. 1539–1548. ACM,
2012.
20. Hinds, P.J. The curse of expertise: The effects of
expertise and debiasing methods on prediction of novice
performance. Journal of Experimental Psychology:
Applied, 5(2):205, 1999.
21. Horton, J.J. and Golden, J.M. Reputation inflation:
Evidence from an online labor market. 2015.
22. Ipeirotis, P.G. Analyzing the amazon mechanical turk
marketplace. In XRDS: Crossroads, The ACM Magazine
for Students - Comp-YOU-Ter, pp. 16–21. ACM, 2010.
23. Ipeirotis, P.G., Provost, F., and Wang, J. Quality
management on amazon mechanical turk. In Proceedings
of the ACM SIGKDD workshop on human computation,
pp. 64–67. ACM, 2010.
24. Irani, L.C. and Silberman, M. Turkopticon: Interrupting
worker invisibility in amazon mechanical turk. In
Proceedings of the SIGCHI Conference on Human
Factors in Computing Systems, pp. 611–620. ACM, 2013.
25. Jain, S. and Parkes, D.C. A game-theoretic analysis of
games with a purpose. In Internet and Network
Economics, pp. 342–350. Springer, 2008.
26. Rzeszotarski, J.M. and Kittur, A. Instrumenting the crowd:
Using implicit behavioral measures to predict task
performance. In Proceedings of the 24th annual ACM
symposium on User interface software and technology,
pp. 13–22. ACM, 2011.
27. Jurca, R. and Faltings, B. Collusion-resistant,
incentive-compatible feedback payments. In Proceedings
of the 8th ACM conference on Electronic commerce, pp.
200–209. ACM, 2007.
28. Kamar, E. and Horvitz, E. Incentives for truthful
reporting in crowdsourcing. In Proceedings of the 11th
International Conference on Autonomous Agents and
Multiagent Systems-Volume 3, pp. 1329–1330.
International Foundation for Autonomous Agents and
Multiagent Systems, 2012.
29. Kittur, A., Chi, E.H., and Suh, B. Crowdsourcing user
studies with mechanical turk. In Proceedings of the
SIGCHI Conference on Human Factors in Computing
Systems, pp. 453–456. ACM, 2008.
30. Kittur, A., Smus, B., Khamkar, S., and Kraut, R.E.
Crowdforge: Crowdsourcing complex work. In
Proceedings of the 24th annual ACM symposium on User
interface software and technology, pp. 43–52. ACM,
2011.
31. Kittur, A., et al. The future of crowd work. In
Proceedings of the 2013 conference on Computer
supported cooperative work, pp. 1301–1318. ACM, 2013.
32. Law, E. and Ahn, L.v. Human computation. Synthesis
Lectures on Artificial Intelligence and Machine Learning,
5(3):1–121, 2011.
33. Martin, D., Hanrahan, B.V., O’Neill, J., and Gupta, N.
Being a turker. In Proceedings of the 17th ACM
conference on Computer supported cooperative work &
social computing, pp. 224–235. ACM, 2014.
34. Mason, W. and Suri, S. Conducting behavioral research
on amazon’s mechanical turk. Behavior research
methods, 44(1):1–23, 2012.
35. Mason, W. and Watts, D.J. Financial incentives and the
performance of crowds. In Proceedings of the ACM
SIGKDD Workshop on Human Computation, pp. 77–85.
ACM, 2009.
36. Mitra, T., Hutto, C., and Gilbert, E. Comparing person- and process-centric strategies for obtaining quality data
on amazon mechanical turk. In Proceedings of the 33rd
Annual ACM Conference on Human Factors in
Computing Systems, CHI ’15, pp. 1345–1354. ACM,
New York, NY, USA, 2015.
37. Narula, P., et al. Mobileworks: A mobile crowdsourcing
platform for workers at the bottom of the pyramid. In
Proceedings of the 11th AAAI Conference on Human
Computation, AAAIWS’11-11, pp. 121–123. AAAI
Press, 2011.
38. Navarro, D. Some reflections on trying to be ethical on
mechanical turk. EPC Online Experiments Workshop,
The University of Adelaide, http://bit.ly/2aRQBFZ, 2015.
39. Nisan, N., Roughgarden, T., Tardos, E., and Vazirani, V.V.
Algorithmic game theory, vol. 1. Cambridge University
Press Cambridge, 2007.
40. Papaioannou, T.G. and Stamoulis, G.D. An incentives’
mechanism promoting truthful feedback in peer-to-peer
systems. In Cluster Computing and the Grid, 2005.
CCGrid 2005. IEEE International Symposium on, vol. 1,
pp. 275–283. IEEE, 2005.
41. Pickard, G., et al. Time-critical social mobilization.
Science, 334(6055):509–512, 2011.
42. Resnick, P. and Zeckhauser, R. Trust among strangers in
internet transactions: Empirical analysis of ebay’s
reputation system. The Economics of the Internet and
E-commerce, 11(2):23–25, 2002.
43. Resnick, P., et al. Grouplens: an open architecture for
collaborative filtering of netnews. In Proceedings of the
1994 ACM conference on Computer supported
cooperative work, pp. 175–186. ACM, 1994.
44. Retelny, D., et al. Expert crowdsourcing with flash teams.
In Proceedings of the 27th annual ACM symposium on
User interface software and technology, pp. 75–85. ACM,
2014.
45. Rittel, H.W. and Webber, M.M. Dilemmas in a general
theory of planning. Policy sciences, 4(2):155–169, 1973.
46. Salehi, N., Irani, L.C., and Bernstein, M.S. We are
dynamo: Overcoming stalling and friction in collective
action for crowd workers. In Proceedings of the 33rd
Annual ACM Conference on Human Factors in
Computing Systems, pp. 1621–1630. ACM, 2015.
47. Shaw, A.D., Horton, J.J., and Chen, D.L. Designing
incentives for inexpert human raters. In Proceedings of
the ACM 2011 conference on Computer supported
cooperative work, pp. 275–284. ACM, 2011.
48. Sheng, V.S., Provost, F., and Ipeirotis, P.G. Get another
label? improving data quality and data mining using
multiple, noisy labelers. In Proceedings of the 14th ACM
SIGKDD international conference on Knowledge
discovery and data mining, pp. 614–622. ACM, 2008.
49. Silberman, M., Ross, J., Irani, L., and Tomlinson, B.
Sellers’ problems in human computation markets. In
Proceedings of the ACM SIGKDD workshop on Human
Computation, pp. 18–21. ACM, 2010.
50. Singer, Y. and Mittal, M. Pricing mechanisms for
crowdsourcing markets. In Proceedings of the 22nd
international conference on World Wide Web, pp.
1157–1166. ACM, 2013.
51. Singla, A. and Krause, A. Truthful incentives in
crowdsourcing tasks using regret minimization
mechanisms. In Proceedings of the 22nd international
conference on World Wide Web, pp. 1167–1178.
International World Wide Web Conferences Steering
Committee, 2013.
52. Stefanovitch, N., Alshamsi, A., Cebrian, M., and Rahwan,
I. Error and attack tolerance of collective problem
solving: The darpa shredder challenge. EPJ Data Science,
3(1):1–27, 2014.
53. Teng, C., Lauterbach, D., and Adamic, L.A. I rate you.
you rate me. should we do so publicly? In WOSN. 2010.
54. Vaish, R., et al. Twitch crowdsourcing: Crowd
contributions in short bursts of time. In Proceedings of
the SIGCHI Conference on Human Factors in Computing
Systems, CHI ’14, pp. 3645–3654. ACM, New York, NY,
USA, 2014.
55. Von Ahn, L. and Dabbish, L. Designing games with a
purpose. Communications of the ACM, 51(8):58–67,
2008.
56. Zhang, Y. and Van der Schaar, M. Reputation-based
incentive protocols in crowdsourcing applications. In
INFOCOM, 2012 Proceedings IEEE, pp. 2140–2148.
IEEE, 2012.

Additional information

Stanford Crowd Research Collective
Stanford University
daemo@cs.stanford.edu

Permalink -

https://repository.canterbury.ac.uk/item/8wqvq/boomerang-rebounding-the-consequences-of-reputation-feedback-on-crowdsourcing-platforms



Related outputs

Utilization of ChatGPT in CDIO projects to enhance the literacy of international students
Manna, S., Williams, S., Richmond-Fuller, A. and Nortcliffe, A. 2024. Utilization of ChatGPT in CDIO projects to enhance the literacy of international students.
The role of use cases when adopting augmented reality into higher education pedagogy
Ward, G., Turner, S., Pitt, C., Qi, M., Richmond-Fuller, A. and Jackson, T. 2024. The role of use cases when adopting augmented reality into higher education pedagogy.
Adaptive and flexible online learning during Covid19 lockdown
Manna, S., Nortcliffe, A., Sheikholeslami, G. and Richmond-Fuller, A. 2021. Adaptive and flexible online learning during Covid19 lockdown.
Together apart: nurturing inclusive, accessible and diverse connections within the Canterbury Christ Church University (CCCU) community during COVID-19
Richmond-Fuller, A. 2020. Together apart: nurturing inclusive, accessible and diverse connections within the Canterbury Christ Church University (CCCU) community during COVID-19.
Prototype tasks: Improving crowdsourcing results through rapid, iterative task design
Gaikwad, S.S., Chhibber, N., Sehgal, V., Ballav, A., Mullings, C., Nasser, A., Richmond-Fuller, A., Gilbee, A., Gamage, D., Whiting, M., Zhou, S., Matin, S., Niranga, S., Goyal, S., Majeti, M., Srinivas, P., Ginzberg, A., Mananova, K., Ziulkoski, K., Regino, J., Sarma, S., Sinha, A., Paul, A., Diemer, C., Murag, M., Dai, W., Pandey, M., Vaish, R. and Bernstein, M. 2017. Prototype tasks: Improving crowdsourcing results through rapid, iterative task design.
Crowd guilds: Worker-led reputation and feedback on crowdsourcing platforms
Whiting, M. E., Gamage, D., Gaikwad, S. S., Gilbee, A., Goyal, S., Ballav, A., Majeti, D., Chhibber, N., Richmond-Fuller, A., Vargus, F., Sharma, T. S., Chandrakanthan, V., Moura, T., Salih, M. H., Kalejaiye, G. B. T., Ginzberg, A., Mullings, C. A., Dayan, Y., Milland, K., Orefice, H., Regino, J., Parsi, S., Mainali, K., Sehgal, V., Matin, S., Sinha, A., Vaish, R. and Bernstein, M. S. 2017. Crowd guilds: Worker-led reputation and feedback on crowdsourcing platforms. in: CSCW '17: Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing New York Association for Computing Machinery. pp. 1902-1913
The Daemo crowdsourcing marketplace
Gaikwad, S. S., Whiting, M., Gamage, D., Mullings, C. A., Majeti, D., Goyal, S., Gilbee, A., Chhibber, N., Ginzberg, A., Ballav, A., Matin, A., Richmond-Fuller, A., Sehgal, V., Sarma, T., Nasser, A., Regino, J., Zhou, S., Stolzoff, A., Mananova, K., Dhakal, D., Srinivas, P., Ziulkoski, K., Niranga, S. S., Salih, M., Sinha, A., Vaish, R. and Bernstein, M. S. 2017. The Daemo crowdsourcing marketplace. in: Lee, C.P. and Poltrock, S. (ed.) CSCW '17 Companion: Companion of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing New York Association for Computing Machinery.