About the workshop

Crowdsourcing has become an increasingly common method for finding solutions to a variety of complex problems. Organizations are utilizing crowdsourcing technologies to seek solutions to their wicked R&D challenges on innovation intermediaries like InnoCentive, designers are looking for self-organized creative talent in flash teams (Retelny et al., 2014), businesses are distributing work to crowdworkers (Kittur et al., 2013), governments are engaging citizens in collaborative policymaking (Aitamurto and Landemore, 2015), and universities and other research institutions are involving the public to collect or improve research data (Kamar et al., 2012; Causer and Wallace, 2012). Both paid crowdsourcing, i.e. crowdwork on platforms like Mechanical Turk and oDesk, and voluntary crowdsourcing without monetary compensation are constantly morphing into new forms and migrating to new contexts, driving continued innovation in the realm of crowdsourcing.

As a parallel development, the literature on crowdsourcing has been proliferating. However, several aspects of crowdsourcing remain understudied. There is a lack of knowledge about the essence of the crowds: the crowd members’ identities, their roles, and the deep drivers of the crowd. We know, for instance, only a little about the drivers of participation in crowdsourced policymaking, and whether these factors align with the drivers detected in other types of voluntary crowdsourcing, such as crisis mapping (Starbird, 2012), or with those in paid crowdsourcing (cf. Brabham, 2010). Furthermore, the immediate motivation factors often differ from those that lead to continuous engagement in crowdsourcing. The factors for continuous engagement keep participants engaged over long periods of time and across multiple sequences of crowdsourced endeavors. In these cases, participation in crowdsourcing may become a part of the crowd member’s identity; it can develop into something larger than fleetingly contributing a piece of knowledge or accomplishing a single microtask. Examining these “deep drivers” of the crowd would help us understand the crowd’s perspective and experience in crowdsourcing. It would also enable us to design incentives for optimal contributions and for sustainability in crowdsourcing.

Furthermore, there is a lack of research on the roles that crowd members take in crowdsourcing projects. By roles we mean both the roles that participants take in group interactions within the crowdsourcing initiative and the technical roles afforded by the platform. We know that there is typically a small group of “superusers” in crowdsourcing efforts, but we know little about the subtleties of the participants’ roles and activities, or how these roles differ across contexts. We are interested in examining, for instance, the boundaries between newcomers and experts, highly active participants and more passive ones, and moderators and mentors, as well as the stages in which participants grow into these roles and switch back and forth between them. We also want to know more about the artefacts that the crowd uses for conducting their tasks and for monitoring their work, both individually and as a community. For instance, Fort et al.’s (2011) analysis of available worker data from Amazon Mechanical Turk suggests that the real number of active Turkers is between 15,059 and 42,912, and that 80% of the tasks are carried out by the 20% most active (3,011–8,582) workers. In other words, only a fraction of crowdworkers actually work very actively.

Moreover, there is little knowledge about the identity of the crowds, including the crowd members’ demographic profiles. Thus, we do not really know “who” these participants are, particularly in voluntary crowdsourcing, where the crowd most often participates anonymously and no demographic data is gathered. Finally, we would like to better understand the crowd’s decision-making processes. For instance, we would like to know how crowd members decide to contribute to a project and what supports their decision making. We would also like to understand the coordination and collaboration mechanisms behind the scenes that help in building these crowdsourcing systems. These mechanisms could include the use of external information sources, freeware and other software resources, communication tools beyond those provided by the platform, and so on.

To this end, in this ECSCW workshop we seek to address questions about the motivation factors, roles, and identities of participants in both voluntary and paid crowdsourcing. We invite both empirical and theoretical work, position papers, and works in progress. We encourage diverse and creative methods and theoretical frameworks for examining these aspects. In this one-day workshop we will apply interactive methods, such as small-group discussions and flash presentations, to explore these topics. The workshop will result in a research agenda in which we, as a part of the ECSCW community, present a roadmap for addressing these questions. This deliverable will be posted online and shared widely.

The workshop builds on three earlier successful workshops: Back to the Future of Organizational Work: Crowdsourcing Digital Work Marketplaces; Structures for Knowledge Co-creation between Organizations and the Public, hosted at ACM CSCW 2014; and The Morphing Organization – Rethinking Groupwork Systems in the Era of Crowdwork, hosted at ACM GROUP 2014.


References

Aitamurto, T., & Landemore, H. (2015). Five design principles for crowdsourced policymaking: Assessing the case of crowdsourced off-road traffic law in Finland. Journal of Social Media for Organizations, 2(1).

Brabham, D. C. (2010). Moving the crowd at Threadless. Information, Communication & Society, 13(8), 1122–1145.

Causer, T., & Wallace, V. (2012). Building a volunteer community: results and findings from Transcribe Bentham. Digital Humanities Quarterly, 6.

Fort, K., Adda, G., & Cohen, K. B. (2011). Amazon Mechanical Turk: Gold mine or coal mine? Computational Linguistics, 37(2), 413–420.

Kamar, E., Hacker, S., & Horvitz, E. (2012, June). Combining human and machine intelligence in large-scale crowdsourcing. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems-Volume 1 (pp. 467-474). International Foundation for Autonomous Agents and Multiagent Systems.

Kittur, A., et al. (2013). The future of crowd work. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work. ACM.

Retelny, D., et al. (2014). Expert crowdsourcing with flash teams. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology. ACM.

Starbird, K. (2012). Crowd work, crisis and convergence: How the connected crowd organizes information during mass disruption events. Doctoral thesis, University of Colorado.
