CrowdScale 2013

A workshop at HCOMP 2013: The 1st AAAI Conference on Human Computation and Crowdsourcing
November 9, 2013

Crowdsourcing at a large scale raises a variety of open challenges:

  • How do we programmatically measure, incentivize and improve the quality of work across thousands of workers answering millions of questions daily? 
  • As the volume, diversity and complexity of crowdsourcing tasks increase, how do we scale the hiring, training and evaluation of workers?  
  • How do we design effective elastic marketplaces for more skilled work? 
  • How do we adapt models for long-term, sustained contributions rather than ephemeral participation of workers?

We believe tackling such problems will be key to taking crowdsourcing to the next level – from its uptake by early adopters today, to its future as how the world’s work gets done.

To advance research and practice in crowdsourcing at scale, our workshop invites position papers tackling such issues of scale. In addition, we are organizing a shared task challenge on how best to aggregate crowd labels in large crowdsourcing datasets released by Google and CrowdFlower.
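
For concreteness, the simplest baseline for the aggregation task is majority voting over worker judgments. The Python sketch below illustrates that baseline only; the file name and the question_id / worker_id / label columns are assumptions for illustration, not the schema of the released datasets, and the shared task exists precisely to encourage approaches that improve on this kind of naive consensus.

    import csv
    from collections import Counter, defaultdict

    def majority_vote(judgments_path):
        """Return the most frequent label per question (ties broken arbitrarily)."""
        votes = defaultdict(Counter)
        with open(judgments_path, newline="") as f:
            for row in csv.DictReader(f):
                # Column names are assumptions for illustration, not the
                # actual schema of the Google/CrowdFlower datasets.
                votes[row["question_id"]][row["label"]] += 1
        return {qid: counts.most_common(1)[0][0] for qid, counts in votes.items()}

    if __name__ == "__main__":
        consensus = majority_vote("judgments.csv")  # hypothetical file name
        for qid, label in sorted(consensus.items()):
            print(qid, label)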

Twitter: #crowdscale · @CrowdAtScale

Organizers


Tatiana Josephy (@tatianajosephy), CrowdFlower

Matthew Lease (@mattlease), University of Texas at Austin

Praveen Paritosh (@heuristicity), Google


Advisory Committee


Omar Alonso, Microsoft

Ed Chi, Google

Lydia Chilton, University of Washington

Matt Cooper, oDesk

Peng Dai, Google

Benjamin Goldenberg, Yelp

David Huynh, Google

Panos Ipeirotis, Google/NYU

Chris Lintott, Zooniverse/GalaxyZoo

Greg Little, oDesk

Stuart Lynn, Zooniverse/GalaxyZoo

Stefano Mazzocchi, Google

Rajesh Patel, Microsoft

Mike Shwe, Google

Rion Snow, Twitter

Maria Stone, Microsoft

Alexander Sorokin, CrowdFlower

Jamie Taylor, Google

Tamsyn Waterhouse, Google

Patrick Philips, LinkedIn

Sanga Reddy Peerreddy, SetuServ
