
How to run successful experiments and get the most out of Amazon's Mechanical Turk

Sunday, October 18, 2015

WorkerId (and all MTurk Fields) Sent to Qualtrics

Background

TurkPrime can now automatically add Worker ID, HIT ID, and Assignment ID to data files, including CSV and SPSS. These fields can be very useful for matching Workers across multiple data files. For example, in longitudinal studies researchers typically have to ask Workers to provide their Worker IDs. These IDs are then used to align the rows across multiple data files that are collected at different time points of the study. Relying on Workers to provide their Worker IDs, however, typically results in data loss, as some Workers do not enter their Worker ID correctly. Embedded query strings solve this problem by guaranteeing that the Worker ID is correctly recorded for each Worker.

How to automatically add Worker ID and other fields to data files

A video tutorial can be viewed here.

On the Design Survey page, go to 3. Setup HIT and Payment.

 


Scroll down to Query String Parameters, where you can see more information about how this works.
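
As a rough illustration of the mechanism, here is a minimal sketch (not TurkPrime's actual code) of how MTurk identifiers can be appended to a survey link as query string parameters and then captured by the survey platform. The parameter names and the Qualtrics link below are assumptions for illustration; the Query String Parameters panel shows the names TurkPrime actually uses.

    # Illustrative sketch (not TurkPrime's implementation): appending MTurk
    # identifiers to a survey link as query string parameters. The parameter
    # names below are assumptions; check the Query String Parameters panel
    # for the names TurkPrime actually appends.
    from urllib.parse import urlencode

    def build_survey_url(base_url, worker_id, assignment_id, hit_id):
        """Append MTurk identifiers to a survey link as query string parameters."""
        params = urlencode({
            "workerId": worker_id,
            "assignmentId": assignment_id,
            "hitId": hit_id,
        })
        return f"{base_url}?{params}"

    # The survey platform (e.g., Qualtrics embedded data fields) can then read
    # these parameters and write them into the data file for each response.
    print(build_survey_url(
        "https://yourorg.qualtrics.com/jfe/form/SV_example",  # hypothetical survey link
        "A1EXAMPLEWORKER", "3EXAMPLEASSIGNMENT", "3EXAMPLEHIT",
    ))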

Tuesday, August 25, 2015

Easily Copy Past HITs

TurkPrime has just released a Copy feature which makes duplicating past TurkPrime HITs simple and fast.

To copy the settings from an old survey to a new one, follow these simple steps:


  1.  Go to the HIT you want to copy in your Dashboard, and click the Copy HIT button in the Actions section.
  2. A message box will appear asking you which environment you want to launch your HIT into.



    You can then decide whether you want to test out your HIT in Sandbox mode or run it in Live mode. This feature is also useful if you originally launched your HIT in Sandbox and now want to launch it Live.
  3. Once you choose where you want to launch your HIT, you will be taken to the Design Survey screen with all your original survey settings already filled in. You can go over the settings and make any changes you want before you approve the HIT. After you review the new survey settings, just approve the survey and launch it from your Dashboard, as you would with any other survey.

Monday, June 29, 2015

Our Micro-batching Feature will Improve the Quality of Your Data

MicroBatch


We have deployed a micro-batching feature which allows Requesters to break up their studies into smaller segments and to include time intervals between the segments. We developed this feature to increase sample representativeness. This feature can also save 50% in MTurk fees.


The Problem

Here is why this feature is important for data quality. If a Requester launches a study on Monday afternoon, the sample may be biased toward people who are not working on a weekday (the unemployed, stay-at-home parents, etc.).
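
To make the idea concrete, here is a minimal sketch (not TurkPrime's implementation) of what splitting a study into micro-batches might look like; the batch size, interval, and start time are assumed values for illustration.

    # Illustrative sketch (not TurkPrime's implementation): splitting a study's
    # total assignments into smaller micro-batches launched at different times,
    # so participation is spread across days and times of day.
    from datetime import datetime, timedelta

    def plan_micro_batches(total_assignments, batch_size, start, interval_hours):
        """Return (launch_time, n_assignments) pairs for each micro-batch."""
        schedule = []
        launched = 0
        t = start
        while launched < total_assignments:
            n = min(batch_size, total_assignments - launched)
            schedule.append((t, n))
            launched += n
            t += timedelta(hours=interval_hours)
        return schedule

    # Example: 300 assignments in batches of 9, launched every 12 hours.
    # Batches with fewer than 10 assignments also avoid MTurk's surcharge for
    # larger HITs, which is where the fee savings mentioned above come from.
    for when, n in plan_micro_batches(300, 9, datetime(2015, 6, 29, 9, 0), 12):
        print(when, n)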

Monday, June 8, 2015

Bounce and Completion Rate

In the latest release of TurkPrime.com we added many new features and fixes, among them two additional metrics for every survey:

  • Bounce Rate
  • Completion Rate

The Bounce Rate is the percentage of Amazon Workers who previewed your survey but decided not to accept it. An open question is whether and how this self-selection of participants affects the representativeness of the participant pool. In addition, a high bounce rate may be an indicator that there is something wrong with your survey.
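
For reference, a minimal sketch of how the bounce rate could be computed from preview and accept counts (the counts below are made up):

    # Illustrative computation: bounce rate from preview and accept counts.
    def bounce_rate(previews, accepts):
        """Percentage of Workers who previewed the HIT but did not accept it."""
        return 100.0 * (previews - accepts) / previews if previews else 0.0

    print(bounce_rate(previews=200, accepts=150))  # 25.0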

Thursday, May 21, 2015

Association for Psychological Science Symposium

Please join us this Friday, May 22nd, at 1 PM at our APS symposium in New York City. We will be presenting talks on TurkPrime and how to make the most of Mechanical Turk.




Wednesday, May 13, 2015

Reverse Rejections

Problem:

How can a Requester who rejected an assignment in error undo the mistake? A rejected assignment affects the Worker negatively and often results in negative feedback for the Requester, which can damage the Requester's online reputation and lower Worker participation in future HITs. What can a Requester do to reverse the rejection?


Solution:

Reversing a rejection in TurkPrime is as simple as using the "Reverse Rejection" feature. Select the Worker IDs you wish to reverse, add an optional message, and you are done! No programming or installations are needed.

Maximizing HIT Participation


Problem:

How can you increase Worker participation rates and speed up the completion of an Amazon Mechanical Turk HIT? This is particularly an issue with HITs that require a large number of participants or that have Qualifications limiting the number of eligible Workers.

Solution:

By monitoring the participation rates of hundreds of HITs we have observed the following patterns that increase participation significantly:

Friday, May 8, 2015

Qualtrics (and other platforms) Mechanical Turk Integration With TurkPrime

Run Qualtrics Surveys on MTurk using TurkPrime

Running Qualtrics surveys on TurkPrime streamlines three major pain points that Requesters have dealt with:

  1. A dynamic secret key that Workers must enter to verify completion of the survey; the code is unique per user and, therefore, unshareable.
  2. Auto-approval and rejection of Worker assignments based on the dynamic secret key.
  3. Passing the Mechanical Turk WorkerId and AssignmentId into Qualtrics so that Workers can be uniquely identified and matched up with MTurk.
The following describes these basic features and how to use them.


The TurkPrime Design Survey form is intuitive and similar to the MTurk Requester HIT design form, except that it adds many useful features such as excluding Workers, targeting specific Workers, and much more. Below is a portion of the TurkPrime Design Survey form where you specify the Survey Hyperlink.


Qualtrics Surveys with Worker-Specific Secret Keys on MTurk using TurkPrime

Now you can also run Qualtrics surveys with TurkPrime Worker-Specific Secret Keys. That means each Worker is assigned a unique secret key to demonstrate that they completed the linked Qualtrics survey, so there is no possibility that Workers are sharing a secret key without completing the linked survey.

Other Platform Integration


The Dynamic Secret Completion Code can easily be integrated (and has been by many researchers) into other, non-Qualtrics systems as well. TurkPrime calculates the secret code based on query string parameters that TurkPrime adds to the URL of your survey. These parameters are named:
  • a
  • b
  • c

As long as your survey platform ensures that those query string values are preserved in the URL when a participant reaches the last page of your survey (the page that contains the TurkPrime iframe), a completion code should be displayed.

This can be achieved by capturing the parameters within your study. Nearly all platforms support this functionality. Please contact us if you have questions.
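
Here is a minimal sketch of the general pattern, assuming a custom (non-Qualtrics) survey application; the capture-and-carry mechanism will vary by platform, and the function names below are purely illustrative.

    # Illustrative sketch (assumes a custom survey app; details vary by platform):
    # capture the a, b, c query string parameters on the first page of the survey
    # and carry them through to the last page, which contains the TurkPrime iframe,
    # so the completion code can be computed and displayed.
    from urllib.parse import parse_qs, urlencode

    def capture_turkprime_params(query_string):
        """Extract the a, b, c parameters TurkPrime appends to your survey URL."""
        qs = parse_qs(query_string)
        return {k: qs[k][0] for k in ("a", "b", "c") if k in qs}

    def final_page_url(last_page_url, captured_params):
        """Re-append the captured parameters so they are present in the URL of the
        last page of your survey (the page that contains the TurkPrime iframe)."""
        return f"{last_page_url}?{urlencode(captured_params)}"

    # Example flow: store the result of capture_turkprime_params() in the
    # participant's session, then build the last-page URL from it.
    params = capture_turkprime_params("a=123&b=456&c=789")
    print(final_page_url("https://example.org/survey/last-page", params))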

Thursday, May 7, 2015

Exclude Workers With One Click

Problem:

Suppose you're running a Mechanical Turk survey and need to exclude Workers who took a previous survey. How can you quickly set this up?

Some of the currently used solutions require multiple setup steps and are not turnkey; others require Workers to enter their Worker ID, which may self-filter Workers and limit the number of Workers taking your survey.

Solution:

Exclude Workers Feature 

Create your surveys using TurkPrime.com's "Exclude Workers" feature. When your HIT launches, it will have a Qualification Requirement that limits your HIT to only the Workers not in your exclude list. All excluded Workers will be disqualified from taking your HIT.
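
Under the hood, exclusion on MTurk works through a Qualification: excluded Workers are granted a Qualification, and the HIT requires that the Qualification not exist. TurkPrime handles all of this for you; the sketch below, written against the modern boto3 MTurk client, only illustrates the general mechanism and is not TurkPrime's code. The names and Worker IDs are hypothetical.

    # Illustrative sketch of the underlying MTurk mechanism (not TurkPrime's code),
    # using the boto3 MTurk client: grant a Qualification to excluded Workers and
    # require that the Qualification NOT exist on the HIT.
    import boto3

    mturk = boto3.client("mturk", region_name="us-east-1")

    qual = mturk.create_qualification_type(
        Name="Completed earlier survey",               # hypothetical name
        Description="Workers who took the previous survey and should be excluded",
        QualificationTypeStatus="Active",
    )
    qual_id = qual["QualificationType"]["QualificationTypeId"]

    for worker_id in ["A1EXAMPLEWORKER", "A2EXAMPLEWORKER"]:   # your exclude list
        mturk.associate_qualification_with_worker(
            QualificationTypeId=qual_id,
            WorkerId=worker_id,
            IntegerValue=1,
            SendNotification=False,
        )

    # When creating the HIT, require that the Qualification does NOT exist:
    exclude_requirement = {
        "QualificationTypeId": qual_id,
        "Comparator": "DoesNotExist",
        "ActionsGuarded": "DiscoverPreviewAndAccept",
    }
    # ...pass [exclude_requirement] as QualificationRequirements to create_hit().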

Longitudinal and Follow up Surveys on Mechanical Turk

Problem:

Suppose you need to run a survey on Mechanical Turk and follow up with the same workers a week, month or year later. A few issues come up:

  1. How can I limit the follow up surveys to survey takers who completed the first study - without the worker needing to follow the link and enter their Worker ID? 
  2. Can I notify those workers of my follow up survey?
  3. Can I set up a follow up survey so that the workers who take that survey do not know why they were selected?

Solution:

Include Workers Feature 

Create your surveys using TurkPrime.com's "Include Workers" feature. When your HIT launches, it will have a Qualification Requirement that limits your HIT to only the Workers you allowed. All other Workers will be disqualified from taking your HIT.
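
Inclusion is the mirror image of the exclusion sketch shown earlier, and again this is only an illustration of the MTurk mechanism, not TurkPrime's code: grant a Qualification to the Workers you want to re-contact and require that it exist on the follow-up HIT. The Qualification ID below is a placeholder.

    # Illustrative sketch (not TurkPrime's code): require that a Qualification
    # granted to first-wave Workers EXISTS on the follow-up HIT.
    include_requirement = {
        "QualificationTypeId": "3EXAMPLEQUALID",       # hypothetical ID of the qualification
        "Comparator": "Exists",
        "ActionsGuarded": "DiscoverPreviewAndAccept",  # others cannot even see the HIT
    }
    # ...pass [include_requirement] as QualificationRequirements to create_hit().
    # Included Workers can also be notified via the MTurk API, e.g. with boto3's
    # notify_workers(Subject=..., MessageText=..., WorkerIds=[...]).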

Tuesday, March 24, 2015

Finding More Mechanical Turk Workers Faster with TurkPrime "Restart HIT"

Problem: Suppose you need to run a HIT with 1,000 Workers, or a HIT that is only open to Workers who have an approval rating of 95% or more and have completed 500 or more HITs. When you launched your HIT, Workers arrived at a nice pace, but over time the pace has slowed to a trickle, such that your HIT will never complete.

What can you do to speed up your HIT?



Solution: TurkPrime.com Feature: Restart HIT 

Simply use the TurkPrime "Restart" feature, which restarts HITs that have become sluggish. When you restart a HIT, it gets "bumped up" in Worker visibility as if it had just been launched.

Monday, March 23, 2015

Creating Mechanical Turk Custom Panels with TurkPrime.com Worker Groups


Problem: Suppose you need to run a group of HITs open only to participants who are women under 50. You previously ran a HIT and know the Worker IDs that you want to reach, but have no way to email them or to limit your survey to only them. How can you proceed?



Solution: TurkPrime.com Worker Groups and Worker Emails

1. TurkPrime recently added a new feature called Worker Groups, which allows any MTurk Requester to create a reusable Worker Group based on MTurk Workers' Worker IDs.



Friday, March 13, 2015

IRB Template for Mechanical Turk and Turk Prime

Overview

An IRB will generally request a description of how participants will be recruited, reimbursed, and interacted with. Additionally, IRBs always request information about how the anonymity of participants is protected. Members of the IRB may not be familiar with Amazon Mechanical Turk, and it may be helpful to include a brief description of MTurk in your IRB application. Note that many MTurk studies will be exempt from review, provided that the nature of MTurk is explained clearly enough and the anonymity of the data collection process is made clear.

Thursday, March 12, 2015

The New New Demographics on Mechanical Turk: Is there Still a Gender Gap?



Overview

Seventy-five Mechanical Turk studies conducted with US-based Workers in 2013 and 2014 were reviewed. Of a total of 32,595 Workers, 15,324 (47%) were female.

Background

It’s been a while since the last update on the demographics of Mechanical Turk Workers, so we thought it was time for a new look. The current consensus seems to be that MTurk Workers are primarily female. For example, Panos Ipeirotis's blog reports that US-based Workers are 65% female. MTurk is always changing, and this report presents data from 75 studies conducted over the last two years.

Monday, March 9, 2015

A Simple Formula for Predicting the Time to Complete a Study on Mechanical Turk


Overview

The simple formula

We describe a general formula for predicting the time it takes Workers to complete survey studies on MTurk. The average Worker takes 10.3 seconds to answer a single question. This means that a study with 60 questions should take approximately 10 minutes. At $6 per hour, the appropriate pay for a 60-question survey would be $1.
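
As a quick sanity check, the simple formula can be expressed in a couple of lines; the $6-per-hour rate below is just the recommendation mentioned in these posts, not a fixed constant.

    # Worked example of the simple formula above (10.3 seconds per question).
    SECONDS_PER_QUESTION = 10.3

    def estimated_minutes(n_questions):
        """Predicted completion time in minutes."""
        return n_questions * SECONDS_PER_QUESTION / 60.0

    def suggested_pay(n_questions, hourly_rate=6.0):
        """Pay implied by the predicted completion time at a given hourly rate."""
        return hourly_rate * estimated_minutes(n_questions) / 60.0

    print(estimated_minutes(60))   # ~10.3 minutes
    print(suggested_pay(60))       # ~$1.03 at $6 per hour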

The slightly more nuanced approach

We also show that increasing the pay rate and decreasing the length of a survey can increase the average time that Workers spend on each question by 36%. Pay rate and the number of questions in a HIT both influence how long Workers spend answering questions. Workers spend less time on each question for longer surveys and for surveys that pay less. Survey length is also a moderator of the association between pay rate and the time that Workers spend answering questions.

A more detailed approach to predicting the length of a survey is depicted in Figure 1, which takes both survey length and pay rate into consideration when predicting the time it takes Workers to answer a single question. For longer surveys with 108 questions or more, time per question is closer to 8.3 seconds and is independent of pay rate. For medium-length surveys with 65 questions, time per question ranges between 9.2 seconds for a pay rate of $1.80 per hour, 10 seconds at $3.50 per hour, and 10.6 seconds for a pay rate of over $5 per hour. Shorter surveys with 28 questions are answered at a rate of 10 seconds per question at $1.80 per hour, 11.5 seconds per question at $3.50 per hour, and over 13.2 seconds for pay rates of over $5 per hour. Overall, higher pay rates are most effective at increasing completion time for shorter surveys.
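
For reference, the per-question times quoted above can be collected into a rough lookup. The length bands and pay-rate breakpoints below are approximations of the three survey lengths and pay rates reported in Figure 1, not exact cutoffs.

    # Rough lookup of the per-question times (in seconds) quoted above.
    # The bands are approximations of Figure 1, not exact cutoffs.
    def seconds_per_question(n_questions, hourly_pay):
        if n_questions >= 108:      # long surveys: roughly flat, independent of pay
            return 8.3
        if n_questions >= 65:       # medium-length surveys (~65 questions)
            return 9.2 if hourly_pay <= 1.80 else 10.0 if hourly_pay <= 3.50 else 10.6
        # short surveys (~28 questions)
        return 10.0 if hourly_pay <= 1.80 else 11.5 if hourly_pay <= 3.50 else 13.2

    print(seconds_per_question(28, 5.50))   # 13.2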

This approach probably also generalizes to non-MTurk online surveys and paper and pencil surveys, but more research should be done to compare completion time across different platforms.

Sunday, March 8, 2015

How to Minimize Dropout Rate on Mechanical Turk

Brief overview

It is generally thought that pay rate does not affect data quality on Mechanical Turk. For example, Buhrmester, Kwang, and Gosling (2011) showed that whether Workers are paid 5 cents or one dollar for a survey study, the internal reliability of the surveys does not change. They did show, however, that fewer Workers will take the surveys that pay less. We recently replicated these findings for both US- and India-based Workers (Litman et al., 2014). Here we show that low pay rates have two effects on Workers: 1) Workers are more likely to return a HIT before completing it, and 2) Workers spend less time answering questions. We examined 30 MTurk studies that were run over the last 6 months. The findings show that 36% of the variance in dropout rate is explained by the length and pay rate of a survey. These results show that low pay rates do more than just slow down the rate at which Workers take HITs. Low pay rates may also negatively impact the representativeness of the data due to high participant dropout, and they may decrease how much attention participants pay to each question. Based on these findings, we recommend against low-paying HITs. We also recommend against overly long surveys, unless Workers are appropriately compensated. To minimize dropout and to maximize time on task, compensation for HITs should not be below $4 per hour and should be closer to $6 per hour or more.

Friday, March 6, 2015

Determining Completion Rate and Dropout Rate on Mechanical Turk

What is the completion rate and dropout rate?
Dropout rate is defined as the percentage of participants who start taking a study but do not complete it. Dropout rate is sometimes referred to as attrition rate, and is the opposite of completion rate (dropout rate = 100 – completion rate). On MTurk, completion rate is defined as the number of Workers who submit a HIT divided by the number of Workers who accept the HIT. Note that, for the definition of completion rate used here, Rejected Workers are counted as completes.
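
A minimal sketch of these two definitions, using made-up counts:

    # Illustrative computation of the definitions above.
    def completion_rate(accepted, submitted):
        """Workers who submitted the HIT divided by Workers who accepted it, as a
        percentage. Rejected Workers count as completes under this definition."""
        return 100.0 * submitted / accepted if accepted else 0.0

    def dropout_rate(accepted, submitted):
        """Dropout (attrition) rate is the complement of the completion rate."""
        return 100.0 - completion_rate(accepted, submitted)

    print(completion_rate(accepted=200, submitted=170))  # 85.0
    print(dropout_rate(accepted=200, submitted=170))     # 15.0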

Why is completion rate important?
Completion rate is an important indicator of data quality. A low completion rate indicates that there is a selection bias which may be influencing the representativeness of the results. A very high dropout rate may also mean that there is something wrong with the study. It is typically good practice to report completion rate in the method/results section of a paper. Indeed, some editors require authors to use the CHERRIES checklist for survey research (Eysenbach, 2004), which asks about a study’s completion rate.

Friday, February 20, 2015

Why use TurkPrime Panels?



MTurk Requesters are often interested in studying specific groups of people. For example, a researcher may be interested in men over 40, Republicans, people who are concerned about the cleanliness of sponges, or cancer survivors. TurkPrime Panels uses various techniques that make the process of acquiring specific MTurk samples faster and cheaper. We can virtually guarantee obtaining panels more economically than most Requesters can on their own. Additionally, we can assemble panels much faster and eliminate the considerable manual work that is otherwise required to obtain them.

Friday, January 30, 2015

System Qualification Enhancements - US State Qualifications

Amazon just announced that their Worker Locale Qualification now supports US state locations. It is currently available through their API and through their web interface.

It is great to see Amazon adding features to their API; just a few months ago they added support for Qualification sets, so that Workers who match even one Qualification in the set are permitted to complete a HIT.
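
For a sense of what a state-level locale requirement looks like, here is a minimal sketch using the modern boto3 MTurk client (the 2015 announcement referred to the API of that era); the state chosen is only an example.

    # Illustrative sketch using the boto3 MTurk client: restrict a HIT to
    # Workers located in a specific US state, e.g. New York.
    locale_requirement = {
        "QualificationTypeId": "00000000000000000071",   # built-in Locale qualification
        "Comparator": "EqualTo",
        "LocaleValues": [{"Country": "US", "Subdivision": "NY"}],
        "ActionsGuarded": "DiscoverPreviewAndAccept",
    }
    # ...pass [locale_requirement] as QualificationRequirements to create_hit().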