Studying pairs of people (e.g., married couples, friends, or coworkers) is becoming increasingly common in the social and behavioral sciences. Online participant populations, such as Mechanical Turk and other online panels, can serve as a rich source of dyadic participants. However, conducting dyadic research online also poses several challenges that must be overcome to obtain high-quality results. This blog post outlines some of the challenges of running dyadic studies online, explains how our MTurk Toolkit can best be used to run a dyadic study, and offers recommendations for best practices based on our experience. Using the methods outlined in this blog, researchers have successfully run numerous dyadic studies with the MTurk Toolkit.
Running Dyadic Studies on Mechanical Turk using TurkPrime
Topics: mechanical turk, romantic couples studies, dyadic studies, mturk toolkit
Moving Beyond Bots: MTurk as a Source of High Quality Data
Highlights
- We collected high quality data on MTurk when using TurkPrime’s IP address and Geocode-restricting tools.
- Using a novel format for our anchoring manipulation, we found that Turkers are highly attentive, even under taxing conditions.
- After querying the TurkPrime database, we found that server farm activity has significantly decreased over the last month.
- When MTurk is used the right way, researchers can be confident they are collecting quality data.
- We are continuously monitoring and maintaining data quality on MTurk.
- Starting this month, we will be conducting monthly surveys of data quality on Mechanical Turk.
Topics: amazon mechanical turk, bots, mturk, server farms, data quality
How to Obtain an Online National Sample Stratified by Wealth
A case study from a recent JESP article
A new study appearing in the Journal of Experimental Social Psychology suggests Americans strongly believe in economic mobility because they fail to appreciate how vast wealth inequality really is. In this blog, we review the study and highlight how Prime Panels helped the author obtain a nationally stratified sample based on wealth, strengthening the study’s findings and generalizability.
Topics: prime panels, sampling, representative samples, research methods
The Universal Exclude List: A Tool for Blocking Bad Workers
By now, even casual users of MTurk have heard about recent concerns of “bots” or low quality data. We’ve written about the topic here and laid out evidence that suggests “bots” are actually foreign workers using tools to obscure their true location (here). Perhaps most importantly, we’ve created two tools to help keep these workers out of your studies. In this blog, we introduce a third tool: the Universal Exclude List.
After the Bot Scare: Understanding What’s Been Happening with Data Collection on MTurk and How to Stop It
Highlights
- Since early August, researchers have worried that “bots” are contaminating data collected on MTurk.
- We found that workers who submit HITs from suspicious geolocations use server farms to hide their true location.
- When using TurkPrime tools to block workers from server farms, we collected high quality data from MTurk workers.
- To learn more about workers who use server farms, we also collected data from them.
- Our evidence suggests recent data quality problems are tied to foreign workers, not bots.
In this blog, we review recent data quality issues on Mechanical Turk and report the results of a study we conducted to investigate the problem.
Topics: bots, turkprime, server farms, high quality data, data quality
Greetings Reader,
Topics: amazon mechanical turk, online research, primepanels, turkprime
Problem
Many researchers wish to target participants from specific states or regions of the United States, such as the Northeast or the West. The problem they often encounter is that using MTurk's Geographic Qualification to specify a particular state is not adequate to ensure participants actually reside in that state.
Topics: amazon mechanical turk, exclude workers, IP address, IP block, location, region, survey, turkprime
Dynamic Secret Completion Codes for SurveyMonkey and SurveyGizmo
What is the completion rate and dropout rate?
Dropout rate is defined as the percentage of participants who start a study but do not complete it. Dropout rate is sometimes referred to as the attrition rate, and it is the complement of the completion rate (dropout rate = 100% − completion rate). On MTurk, completion rate is defined as the number of Workers who submit a HIT divided by the number of Workers who accept the HIT. Note that, under the definition of completion rate used here, rejected Workers are counted as completes.
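As a quick illustration of these definitions, here is a minimal sketch in Python. The accept and submit counts are made up for the example, not data from any study:

```python
# Hypothetical counts for illustration only.
accepted = 120   # Workers who accepted the HIT
submitted = 96   # Workers who submitted the HIT (rejected Workers count as completes)

# Completion rate: submits divided by accepts, expressed as a percentage.
completion_rate = 100 * submitted / accepted   # 80.0

# Dropout (attrition) rate is the complement of the completion rate.
dropout_rate = 100 - completion_rate           # 20.0

print(f"Completion rate: {completion_rate:.1f}%")
print(f"Dropout rate: {dropout_rate:.1f}%")
```

With these example numbers, 24 of the 120 Workers who accepted the HIT dropped out before submitting, giving a 20% dropout rate.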
Topics: amazon mechanical turk, autoapproval, completion code, dynamic, mturk, secret code, surveymonkey, turkprime
On occasion, a worker may complete a study but be unable to successfully submit the assignment for a variety of reasons (timeout, system hiccup, computer shutdown). In such cases, researchers are encouraged to follow up with the worker and open a special HIT specifically for them.
Topics: Uncategorized
Demographic Consistency Over Time on Mechanical Turk
Problem
Ever wonder whether workers are being honest with you when they answer a survey? Or, if you specify that your study should be taken only by women, whether some workers take it anyway even though they are not women?
Topics: amazon mechanical turk, demographics, gender gap, mechanical turk, mturk, panels