Crowd guilds: Worker-led reputation and feedback on crowdsourcing platforms

Book chapter

Whiting, M. E., Gamage, D., Gaikwad, S. S., Gilbee, A., Goyal, S., Ballav, A., Majeti, D., Chhibber, N., Richmond-Fuller, A., Vargus, F., Sharma, T. S., Chandrakanthan, V., Moura, T., Salih, M. H., Kalejaiye, G. B. T., Ginzberg, A., Mullings, C. A., Dayan, Y., Milland, K., Orefice, H., Regino, J., Parsi, S., Mainali, K., Sehgal, V., Matin, S., Sinha, A., Vaish, R. and Bernstein, M. S. 2017. Crowd guilds: Worker-led reputation and feedback on crowdsourcing platforms. in: CSCW '17: Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing. New York: Association for Computing Machinery. pp. 1902-1913.
Authors: Whiting, M. E., Gamage, D., Gaikwad, S. S., Gilbee, A., Goyal, S., Ballav, A., Majeti, D., Chhibber, N., Richmond-Fuller, A., Vargus, F., Sharma, T. S., Chandrakanthan, V., Moura, T., Salih, M. H., Kalejaiye, G. B. T., Ginzberg, A., Mullings, C. A., Dayan, Y., Milland, K., Orefice, H., Regino, J., Parsi, S., Mainali, K., Sehgal, V., Matin, S., Sinha, A., Vaish, R. and Bernstein, M. S.
Abstract

Crowd workers are distributed and decentralized. While decentralization is designed to utilize independent judgment to promote high-quality results, it paradoxically undercuts behaviors and institutions that are critical to high-quality work. Reputation is one central example: crowdsourcing systems depend on reputation scores from decentralized workers and requesters, but these scores are notoriously inflated and uninformative. In this paper, we draw inspiration from historical worker guilds (e.g., in the silk trade) to design and implement crowd guilds: centralized groups of crowd workers who collectively certify each other’s quality through double-blind peer assessment. A two-week field experiment compared crowd guilds to a traditional decentralized crowd work model. Crowd guilds produced reputation signals more strongly correlated with ground-truth worker quality than signals available on current crowd working platforms, and more accurate than in the traditional model.
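The mechanism the abstract describes, aggregating double-blind peer assessments within a guild into a per-worker reputation signal and then checking how well that signal tracks ground-truth quality, can be illustrated with a minimal sketch. Everything below (the sample ratings, the ground-truth scores, the function name, and the use of Spearman rank correlation as the accuracy measure) is a hypothetical illustration of the general idea, not the paper's implementation.

```python
# Illustrative sketch (not the authors' implementation): aggregate
# double-blind peer assessments into a per-worker reputation score,
# then measure how well that score tracks ground-truth quality.
from collections import defaultdict
from scipy.stats import spearmanr  # one plausible rank-correlation metric

# Hypothetical data: (reviewer_id, worker_id, rating) triples from
# double-blind peer review within a guild; identities are hidden
# from both sides, and workers never rate themselves.
peer_ratings = [
    ("r1", "w1", 4), ("r2", "w1", 5), ("r3", "w1", 4),
    ("r1", "w2", 2), ("r2", "w2", 3), ("r3", "w2", 2),
    ("r1", "w3", 5), ("r2", "w3", 4), ("r3", "w3", 5),
]

# Hypothetical ground-truth quality (e.g., from expert-graded gold tasks).
ground_truth = {"w1": 0.82, "w2": 0.55, "w3": 0.91}

def guild_reputation(ratings):
    """Average each worker's double-blind peer ratings into one score."""
    received = defaultdict(list)
    for reviewer, worker, rating in ratings:
        if reviewer != worker:  # exclude any self-assessment
            received[worker].append(rating)
    return {w: sum(rs) / len(rs) for w, rs in received.items()}

reputation = guild_reputation(peer_ratings)
workers = sorted(ground_truth)
rho, p = spearmanr([reputation[w] for w in workers],
                   [ground_truth[w] for w in workers])
print(f"reputation scores: {reputation}")
print(f"Spearman correlation with ground truth: rho={rho:.2f} (p={p:.2f})")
```

A reputation signal like this is informative to the extent that its ranking of workers agrees with the ground-truth ranking (rho close to 1), which is the kind of comparison the field experiment draws between guild-produced signals and the inflated scores typical of decentralized rating.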

Keywords: Crowdsourcing platforms; Human computation
Page range: 1902-1913
Year: 2017
Book title: CSCW '17: Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing
Publisher: Association for Computing Machinery
Output status: Published
Place of publication: New York
ISBN: 9781450343350
Publication dates
Online: 25 Feb 2017
Publication process dates
Deposited: 16 Nov 2020
Digital Object Identifier (DOI): https://doi.org/10.1145/2998181.2998234
Official URL: http://doi.org/10.1145/2998181.2998234
Related URLs: http://cscw.acm.org/2017/program/program_content/Proceedings.html
http://crowdresearch.stanford.edu/

Additional information

Stanford Crowd Research Collective: daemo@cs.stanford.edu

Event: The 20th ACM Conference on Computer-Supported Cooperative Work and Social Computing (CSCW 2017)
Permalink: https://repository.canterbury.ac.uk/item/8wqv5/crowd-guilds-worker-led-reputation-and-feedback-on-crowdsourcing-platforms

Related outputs

Utilization of ChatGPT in CDIO projects to enhance the literacy of international students
Manna, S., Williams, S., Richmond-Fuller, A. and Nortcliffe, A. 2024. Utilization of ChatGPT in CDIO projects to enhance the literacy of international students.
The role of use cases when adopting augmented reality into higher education pedagogy
Ward, G., Turner, S., Pitt, C., Qi, M., Richmond-Fuller, A. and Jackson, T. 2024. The role of use cases when adopting augmented reality into higher education pedagogy.
Adaptive and flexible online learning during Covid19 lockdown
Manna, S., Nortcliffe, A., Sheikholeslami, G. and Richmond-Fuller, A. 2021. Adaptive and flexible online learning during Covid19 lockdown.
Together apart: nurturing inclusive, accessible and diverse connections within the Canterbury Christ Church University (CCCU) community during COVID-19
Richmond-Fuller, A. 2020. Together apart: nurturing inclusive, accessible and diverse connections within the Canterbury Christ Church University (CCCU) community during COVID-19.
Prototype tasks: Improving crowdsourcing results through rapid, iterative task design
Gaikwad, S.S., Chhibber, N., Sehgal, V., Ballav, A., Mullings, C., Nasser, A., Richmond-Fuller, A., Gilbee, A., Gamage, D., Whiting, M., Zhou, S., Matin, S., Niranga, S., Goyal, S., Majeti, M., Srinivas, P., Ginzberg, A., Mananova, K., Ziulkoski, K., Regino, J., Sarma, S., Sinha, A., Paul, A., Diemer, C., Murag, M., Dai, W., Pandey, M., Vaish, R. and Bernstein, M. 2017. Prototype tasks: Improving crowdsourcing results through rapid, iterative task design.
The Daemo crowdsourcing marketplace
Gaikwad, S. S., Whiting, M., Gamage, D., Mullings, C. A., Majeti, D., Goyal, S., Gilbee, A., Chhibber, N., Ginzberg, A., Ballav, A., Matin, A., Richmond-Fuller, A., Sehgal, V., Sarma, T., Nasser, A., Regino, J., Zhou, S., Stolzoff, A., Mananova, K., Dhakal, D., Srinivas, P., Ziulkoski, K., Niranga, S. S., Salih, M., Sinha, A., Vaish, R. and Bernstein, M. S. 2017. The Daemo crowdsourcing marketplace. in: Lee, C.P. and Poltrock, S. (ed.) CSCW '17 Companion: Companion of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing. New York: Association for Computing Machinery.
Boomerang: Rebounding the consequences of reputation feedback on crowdsourcing platforms
Gaikwad, N.S., Morina, D., Ginzberg, A., Mullings, C., Goyal, S., Gamage, D., Diemert, C., Burton, M., Zhou, S., Whiting, M., Ziulkoski, K., Gilbee, A., Niranga, S. S., Sehgal, V., Lin, J., Kristianto, L., Richmond-Fuller, A., Regino, J., Chhibber, N., Majeti, D., Sharma, S., Mananova, K., Dhakal, D., Dai, W., Purynova, V., Sandeep, S., Chandrakanthan, V., Sarma, T., Matin, S., Nasser, A., Nistala, R., Stolzoff, A., Milland, K., Mathur, V., Vaish, R. and Bernstein, M. S. 2016. Boomerang: Rebounding the consequences of reputation feedback on crowdsourcing platforms. in: Rekimoto, J. and Igarashi, T. (ed.) UIST '16: Proceedings of the 29th Annual Symposium on User Interface Software and Technology. New York: Association for Computing Machinery. pp. 625-637.