REFERENCES
1. Facebook ad delivery and pacing algorithm. http://bit.ly/20svn45, 2016.
2. Turk Alert. http://www.turkalert.com/, 2016.
3. Adamic, L.A., Lauterbach, D., Teng, C., and Ackerman, M.S. Rating friends without making enemies. In Proceedings of the International Conference on Weblogs and Social Media (ICWSM '11), 2011.
4. Akerlof, G.A. The market for "lemons": Quality uncertainty and the market mechanism. Quarterly Journal of Economics, 84(3):488–500, 1970.
5. Antin, J. and Shaw, A. Social desirability bias and self-reports of motivation: A study of Amazon Mechanical Turk in the US and India. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 2925–2934. ACM, 2012.
6. Aroyo, L. and Welty, C. Crowd truth: Harnessing disagreement in crowdsourcing a relation extraction gold standard. In International ACM Web Science Conference, 2013.
7. Bederson, B.B. and Quinn, A.J. Web workers unite! Addressing challenges of online laborers. In CHI '11 Extended Abstracts on Human Factors in Computing Systems, pp. 97–106. ACM, 2011.
8. Bernstein, M.S., et al. Direct answers for search queries in the long tail. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 237–246. ACM, 2012.
9. Brawley, A.M. and Pury, C.L. Work experiences on MTurk: Job satisfaction, turnover, and information sharing. Computers in Human Behavior, 54:531–546, 2016.
10. Buchanan, J.M. Externality. Economica, 29(116):371–384, 1962.
11. Callison-Burch, C. Crowd-workers: Aggregating information across Turkers to help them find higher paying work. In Proceedings of the Second AAAI Conference on Human Computation and Crowdsourcing, pp. 8–9. AAAI, 2014.
12. Cheng, J., Teevan, J., and Bernstein, M.S. Measuring crowdsourcing effort with error-time curves. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 205–221. ACM, 2015.
13. Chilton, L.B., Horton, J.J., Miller, R.C., and Azenkot, S. Task search in a human computation market. In Proceedings of the ACM SIGKDD Workshop on Human Computation, pp. 1–9. ACM, 2010.
14. Christoforaki, M. and Ipeirotis, P. STEP: A scalable testing and evaluation platform. In Proceedings of the Second AAAI Conference on Human Computation and Crowdsourcing, 2014.
15. Edelman, B., Ostrovsky, M., and Schwarz, M. Internet advertising and the generalized second-price auction: Selling billions of dollars worth of keywords. Tech. rep., National Bureau of Economic Research, 2005.
16. Gaikwad, S., et al. Daemo: A self-governed crowdsourcing marketplace. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology, pp. 101–102. ACM, 2015.
17. Gupta, A., Thies, W., Cutrell, E., and Balakrishnan, R. mClerk: Enabling mobile crowdsourcing in developing regions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1843–1852. ACM, 2012.
18. Hancock, J.T., Toma, C., and Ellison, N. The truth about lying in online dating profiles. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 449–452. ACM, 2007.
19. Heimerl, K., et al. CommunitySourcing: Engaging local crowds to perform expert work via physical kiosks. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 1539–1548. ACM, 2012.
20. Hinds, P.J. The curse of expertise: The effects of expertise and debiasing methods on prediction of novice performance. Journal of Experimental Psychology: Applied, 5(2):205, 1999.
21. Horton, J.J. and Golden, J.M. Reputation inflation: Evidence from an online labor market. 2015.
22. Ipeirotis, P.G. Analyzing the Amazon Mechanical Turk marketplace. XRDS: Crossroads, The ACM Magazine for Students, 17(2):16–21, 2010.
23. Ipeirotis, P.G., Provost, F., and Wang, J. Quality management on Amazon Mechanical Turk. In Proceedings of the ACM SIGKDD Workshop on Human Computation, pp. 64–67. ACM, 2010.
24. Irani, L.C. and Silberman, M. Turkopticon: Interrupting worker invisibility in Amazon Mechanical Turk. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 611–620. ACM, 2013.
25. Jain, S. and Parkes, D.C. A game-theoretic analysis of games with a purpose. In Internet and Network Economics, pp. 342–350. Springer, 2008.
26. Rzeszotarski, J.M. and Kittur, A. Instrumenting the crowd: Using implicit behavioral measures to predict task performance. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, pp. 13–22. ACM, 2011.
27. Jurca, R. and Faltings, B. Collusion-resistant, incentive-compatible feedback payments. In Proceedings of the 8th ACM Conference on Electronic Commerce, pp. 200–209. ACM, 2007.
28. Kamar, E. and Horvitz, E. Incentives for truthful reporting in crowdsourcing. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems - Volume 3, pp. 1329–1330. International Foundation for Autonomous Agents and Multiagent Systems, 2012.
29. Kittur, A., Chi, E.H., and Suh, B. Crowdsourcing user studies with Mechanical Turk. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 453–456. ACM, 2008.
30. Kittur, A., Smus, B., Khamkar, S., and Kraut, R.E. CrowdForge: Crowdsourcing complex work. In Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology, pp. 43–52. ACM, 2011.
31. Kittur, A., et al. The future of crowd work. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work, pp. 1301–1318. ACM, 2013.
32. Law, E. and von Ahn, L. Human computation. Synthesis Lectures on Artificial Intelligence and Machine Learning, 5(3):1–121, 2011.
33. Martin, D., Hanrahan, B.V., O'Neill, J., and Gupta, N. Being a Turker. In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing, pp. 224–235. ACM, 2014.
34. Mason, W. and Suri, S. Conducting behavioral research on Amazon's Mechanical Turk. Behavior Research Methods, 44(1):1–23, 2012.
35. Mason, W. and Watts, D.J. Financial incentives and the performance of crowds. In Proceedings of the ACM SIGKDD Workshop on Human Computation, pp. 77–85. ACM, 2009.
36. Mitra, T., Hutto, C., and Gilbert, E. Comparing person- and process-centric strategies for obtaining quality data on Amazon Mechanical Turk. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15), pp. 1345–1354. ACM, 2015.
37. Narula, P., et al. MobileWorks: A mobile crowdsourcing platform for workers at the bottom of the pyramid. In Proceedings of the 11th AAAI Conference on Human Computation (AAAIWS '11-11), pp. 121–123. AAAI Press, 2011.
38. Navarro, D. Some reflections on trying to be ethical on Mechanical Turk. EPC Online Experiments Workshop, The University of Adelaide, http://bit.ly/2aRQBFZ, 2015.
39. Nisan, N., Roughgarden, T., Tardos, E., and Vazirani, V.V. Algorithmic Game Theory, vol. 1. Cambridge University Press, Cambridge, 2007.
40. Papaioannou, T.G. and Stamoulis, G.D. An incentives' mechanism promoting truthful feedback in peer-to-peer systems. In Proceedings of the IEEE International Symposium on Cluster Computing and the Grid (CCGrid 2005), vol. 1, pp. 275–283. IEEE, 2005.
41. Pickard, G., et al. Time-critical social mobilization. Science, 334(6055):509–512, 2011.
42. Resnick, P. and Zeckhauser, R. Trust among strangers in Internet transactions: Empirical analysis of eBay's reputation system. The Economics of the Internet and E-commerce, 11(2):23–25, 2002.
43. Resnick, P., et al. GroupLens: An open architecture for collaborative filtering of netnews. In Proceedings of the 1994 ACM Conference on Computer Supported Cooperative Work, pp. 175–186. ACM, 1994.
44. Retelny, D., et al. Expert crowdsourcing with flash teams. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, pp. 75–85. ACM, 2014.
45. Rittel, H.W. and Webber, M.M. Dilemmas in a general theory of planning. Policy Sciences, 4(2):155–169, 1973.
46. Salehi, N., Irani, L.C., and Bernstein, M.S. We are Dynamo: Overcoming stalling and friction in collective action for crowd workers. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, pp. 1621–1630. ACM, 2015.
47. Shaw, A.D., Horton, J.J., and Chen, D.L. Designing incentives for inexpert human raters. In Proceedings of the ACM 2011 Conference on Computer Supported Cooperative Work, pp. 275–284. ACM, 2011.
48. Sheng, V.S., Provost, F., and Ipeirotis, P.G. Get another label? Improving data quality and data mining using multiple, noisy labelers. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 614–622. ACM, 2008.
49. Silberman, M., Ross, J., Irani, L., and Tomlinson, B. Sellers' problems in human computation markets. In Proceedings of the ACM SIGKDD Workshop on Human Computation, pp. 18–21. ACM, 2010.
50. Singer, Y. and Mittal, M. Pricing mechanisms for crowdsourcing markets. In Proceedings of the 22nd International Conference on World Wide Web, pp. 1157–1166. ACM, 2013.
51. Singla, A. and Krause, A. Truthful incentives in crowdsourcing tasks using regret minimization mechanisms. In Proceedings of the 22nd International Conference on World Wide Web, pp. 1167–1178. International World Wide Web Conferences Steering Committee, 2013.
52. Stefanovitch, N., Alshamsi, A., Cebrian, M., and Rahwan, I. Error and attack tolerance of collective problem solving: The DARPA Shredder Challenge. EPJ Data Science, 3(1):1–27, 2014.
53. Teng, C., Lauterbach, D., and Adamic, L.A. I rate you. You rate me. Should we do so publicly? In Proceedings of the Workshop on Online Social Networks (WOSN), 2010.
54. Vaish, R., et al. Twitch crowdsourcing: Crowd contributions in short bursts of time. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '14), pp. 3645–3654. ACM, 2014.
55. Von Ahn, L. and Dabbish, L. Designing games with a purpose. Communications of the ACM, 51(8):58–67, 2008.
56. Zhang, Y. and Van der Schaar, M. Reputation-based incentive protocols in crowdsourcing applications. In Proceedings of IEEE INFOCOM 2012, pp. 2140–2148. IEEE, 2012.