Studying pairs of people (e.g., married couples, friends, or coworkers) is becoming increasingly common in the social and behavioral sciences. Online participant populations, such as Mechanical Turk and other online panels, can serve as a rich source of dyadic participants. However, conducting dyadic research online poses several challenges that must be overcome to obtain high-quality results. This blog post outlines those challenges, explains how our MTurk Toolkit can best be used to run a dyadic study, and offers best-practice recommendations based on our experience. Using the methods outlined here, researchers have successfully run numerous dyadic studies with the MTurk Toolkit.
- We collected high quality data on MTurk when using TurkPrime’s IP address and Geocode-restricting tools.
- Using a novel format for our anchoring manipulation, we found that Turkers are highly attentive, even under taxing conditions.
- After querying the TurkPrime database, we found that server farm activity has significantly decreased over the last month.
- When MTurk is used the right way, researchers can be confident they are collecting quality data.
- We are continuously monitoring and maintaining data quality on MTurk.
- Starting this month, we will be conducting monthly surveys of data quality on Mechanical Turk.
A case study from a recent JESP article
A new study appearing in the Journal of Experimental Social Psychology suggests Americans strongly believe in economic mobility because they fail to appreciate how vast wealth inequality really is. In this blog, we review the study and highlight how Prime Panels helped the author obtain a nationally stratified sample based on wealth, strengthening the study’s findings and generalizability.
By now, even casual users of MTurk have heard recent concerns about “bots” or low-quality data. We’ve written about the topic here and laid out evidence suggesting that “bots” are actually foreign workers using tools to obscure their true location (here). Perhaps most importantly, we’ve created two tools to help keep these workers out of your studies. In this blog, we introduce a third tool: the Universal Exclude List.
- Since early August, researchers have worried that “bots” are contaminating data collected on MTurk.
- We found that workers who submit HITs from suspicious geolocations are using server farms to hide their true location.
- When using TurkPrime tools to block workers from server farms, we collected high quality data from MTurk workers.
- We also collected data from workers who use server farms to learn more about them.
- Our evidence suggests recent data quality problems are tied to foreign workers, not bots.
In this blog, we review recent data quality issues on Mechanical Turk and report the results of a study we conducted to investigate the problem.
Last week, the research community was struck with concern that “bots” were contaminating data collection on Amazon’s Mechanical Turk (MTurk). We wrote about the issue and conducted our own preliminary investigation into the problem using the TurkPrime database. In this blog, we introduce two new tools TurkPrime is launching to help researchers combat suspicious activity on MTurk and reiterate some of the important takeaways from this conversation so far.
Data quality on online platforms
When researchers collect data online, it’s natural to be concerned about data quality. Participants aren’t in the lab, so researchers can’t see who is taking their survey, what those participants are doing while answering questions, or whether participants are who they say they are. Not knowing is unsettling.
TurkPrime is announcing a change in our pricing for the MicroBatch feature. MicroBatch is now included as a Pro feature, with a fee of 2 cents + 5% per complete. This will also provide users with access to all other pro features, with no additional charge. This change is necessary so that we can continue to provide the highest quality service and tools that our users expect.
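As a quick illustration of the new pricing (the reward amount below is a hypothetical example, not part of the announcement), the per-complete fee can be sketched as:

```python
def microbatch_fee(reward_dollars: float) -> float:
    """MicroBatch Pro fee: $0.02 plus 5% of the reward, per complete."""
    return 0.02 + 0.05 * reward_dollars

# For a hypothetical $1.00 reward, the fee is $0.07 per complete.
print(f"${microbatch_fee(1.00):.2f}")
```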
When researchers first learn about conducting research online, it can be difficult to see how all the available tools actually apply to a specific project. This post describes how specific research ideas can be carried out using the kinds of features available for online research. Online research tools can make it much simpler to recruit balanced samples of individuals who are hard to find and selectively sample using more traditional methods.
Some workers on MTurk are extremely active and take the majority of posted HITs. This can lead to many issues, some of which are outlined in our previous post. Although MTurk has over 100,000 workers who take surveys each year, and around 25,000 who take surveys each month, you are much more likely to recruit the most active workers: about 1,000 workers (1% of all workers) take 21% of HITs, and about 10,000 workers (10% of all workers) take 74% of all HITs.
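These figures can be restated as a relative-representation ratio — how overrepresented each group is among completed HITs compared to a uniform draw from all workers. A minimal sketch, using only the numbers cited above:

```python
# (share of workers, share of HITs) — figures from the post above.
groups = {
    "top 1%":     (0.01, 0.21),
    "top 10%":    (0.10, 0.74),
    "bottom 90%": (0.90, 1 - 0.74),
}

# Ratio > 1 means a group is overrepresented among completed HITs.
for name, (worker_share, hit_share) in groups.items():
    ratio = hit_share / worker_share
    print(f"{name}: {ratio:.1f}x as likely per worker to complete a HIT")
```

By this measure, the most active 1% of workers are about 21 times as likely, per worker, to end up in your sample, while a worker in the bottom 90% is only about a third as likely as chance would suggest.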