It is important to consider how many highly experienced workers there are on Mechanical Turk. As discussed in previous posts, the pool of active workers numbers in the thousands, but it is far from inexhaustible. A small group of workers completes a very large share of the HITs posted to MTurk, and these workers are highly experienced and have repeatedly seen measures commonly used in the social and behavioral sciences. Research has shown that repeated exposure to the same measures can harm data collection: it changes the way workers perform, creates treatment effects, gives participants insight into the purpose of some studies, and in some cases reduces the effect sizes of experimental manipulations. This issue is referred to as non-naivete (Chandler, 2014; Chandler, 2016).
Best recruitment practices: working with issues of non-naivete on MTurk
Topics: amazon mechanical turk, approval rating, experience, exposure, HIT, mechanical turk, mturk, naivete, non-naive, primepanels, qualification, recruitment, requester, workers
The internet has a reputation as a place where people can hide behind anonymity and present themselves as very different from who they actually are. Is this a problem on Mechanical Turk? Is the self-reported information provided by Mechanical Turk workers reliable? These are important questions that have been addressed with several different methods. Researchers have examined a) the consistency of responses to the same questions over time and across studies, and b) the validity of responses, or the degree to which items capture answers that reflect the truth about participants. It turns out that there are certain situations in which MTurk workers are likely to lie, but in almost all cases they are who they say they are.
Topics: amazon mechanical turk, anonymous, demographics, HIT, mturk, panels, qualification, turkprime panels, unique worker, worker groups, workers
Hundreds of academic papers are published each year using data collected through Mechanical Turk. Researchers have gravitated to the platform primarily because it provides high-quality data quickly and affordably. But while Mechanical Turk has revolutionized data collection, it is by no means a perfect platform. Some of its major strengths and limitations are summarized below.
Topics: amazon mechanical turk, demographics, exclude workers, google form mechanical turk, HIT, mechanical turk, mturk, mturk api, panels, qualification, study, turkprime panels, unique worker, worker groups, workers
Greetings Reader,
Topics: amazon mechanical turk, online research, primepanels, turkprime
New Feature: Select MTurk Workers by Big Five Personality Types
Run Studies Targeting Specific Big Five Personality Types
TurkPrime introduces a new Big Five personality types qualification: Now social science researchers can run studies targeted to the Big Five:
Topics: primepanels, turkprime, turkprime panels
New Feature! Restarts now available when using HyperBatch
Now available: "Restart HIT" using HyperBatch!
The TurkPrime "Restart" feature (which restarts HITs that have become sluggish; see our blog post on Restarts for all the benefits of this feature) can now be used in conjunction with HyperBatch. This was a highly requested feature from our users, and we are happy to announce that it is now LIVE. The only difference when using the restart feature with HyperBatch is that you can restart the HIT only once per day. This limit helps ensure that our system will not be over-burdened.
Topics: hyperbatch
Problem
Many researchers wish to target participants from specific states or regions of the United States, such as the Northeast or the West. The problem they encounter is that using MTurk's Geographic Qualification to specify a particular state is often not enough to ensure participants actually reside in that state.
Topics: amazon mechanical turk, exclude workers, IP address, IP block, location, region, survey, turkprime
Dynamic Secret Completion Codes for SurveyMonkey and SurveyGizmo
What is the completion rate and dropout rate?
Dropout rate is defined as the percentage of participants who start taking a study but do not complete it. Dropout rate is sometimes referred to as attrition rate, and is the opposite of completion rate (dropout rate = 100 – completion rate). On MTurk, completion rate is defined as the number of Workers who submit a HIT divided by the number of Workers who accept the HIT. Note that, for the definition of completion rate used here, Rejected Workers are counted as completes.
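The definitions above can be sketched as a couple of lines of arithmetic. The counts below are hypothetical examples, not real study data:

```python
# Completion and dropout rates as defined above.
# Note: per the definition used here, rejected Workers still count as completes.

def completion_rate(submitted: int, accepted: int) -> float:
    """Percentage of Workers who accepted the HIT and went on to submit it."""
    return 100.0 * submitted / accepted

def dropout_rate(submitted: int, accepted: int) -> float:
    """Dropout (attrition) is the complement of the completion rate."""
    return 100.0 - completion_rate(submitted, accepted)

# Example: 200 Workers accept the HIT, 170 submit it.
print(completion_rate(170, 200))  # 85.0
print(dropout_rate(170, 200))     # 15.0
```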
Topics: amazon mechanical turk, autoapproval, completion code, dynamic, mturk, secret code, surveymonkey, turkprime
How to Create a Universal Exclude Worker List
Problem:
Requesters may find that some workers, even those with high approval ratings, do not perform to expectations on a study. Sometimes this results in rejecting their work, which lowers the Worker's approval rating. Often, however, the work is unacceptable for research purposes but not worthy of rejection, or it may simply be the policy of the research lab to approve all assignments under IRB rules or another ethical standard they follow.
Topics: amazon mechanical turk, block worker, exclude, exclude workers, mturk, mturk api, qualification, turkprime
Problem:
You are running a longitudinal study and have identified 1,000 workers whom you want to allow to take your second-phase studies. How can you group those workers for easy access?
Or you want to exclude certain workers from taking a number of your studies and wish to group them for easy exclusion in future studies. How can you do that?
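One common way to represent a worker group on MTurk itself is with a custom Qualification: you create a qualification type (e.g. with boto3's `create_qualification_type`), assign it to each worker in the group (`associate_qualification_with_worker`), and then require or forbid it on future HITs. The sketch below builds the `QualificationRequirement` entry that MTurk's `CreateHIT` operation accepts; the qualification ID is a placeholder, and this is one possible approach rather than a description of TurkPrime's internal mechanism:

```python
# Sketch: include or exclude a worker group (represented as a custom MTurk
# Qualification) when creating a HIT. The qualification ID is a placeholder.

def group_requirement(qualification_type_id: str, include: bool) -> dict:
    """Build a QualificationRequirement that requires ('Exists') or
    forbids ('DoesNotExist') membership in the group."""
    return {
        "QualificationTypeId": qualification_type_id,
        "Comparator": "Exists" if include else "DoesNotExist",
        # Hide the HIT entirely from workers who don't meet the requirement.
        "ActionsGuarded": "DiscoverPreviewAndAccept",
    }

# Allow only the 1,000 phase-2 workers to take a follow-up HIT:
include_req = group_requirement("3PLACEHOLDERQUALID", include=True)

# Or keep an excluded group out of all future studies:
exclude_req = group_requirement("3PLACEHOLDERQUALID", include=False)

print(include_req["Comparator"])   # Exists
print(exclude_req["Comparator"])   # DoesNotExist
```

Either dictionary would be passed in the `QualificationRequirements` list when creating the HIT.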
Topics: amazon mechanical turk, exclude, exclude workers, include workers, Longitudinal, qualification, turkprime, worker groups