Smoking Sticks and Carrots
This post is crossposted on the Beeminder blog.
Let’s talk about science! Beehavioral science. A new study published in the New England Journal of Medicine last week has been all over the news. It’s much better than previous studies and statistics I’ve seen on the efficacy of commitment devices. Not because others have been down on commitment devices. On the contrary, I’ve been frustrated in the past by Beeminder competitors who tout statistics about how 80% or whatever of people who risk money succeed. For starters, they usually don’t even rule out the hypothesis that people will lie to keep from losing their money! In other words, “80% succeed” may mean “80% either succeed or cheat and pretend to”. This study is robust to that, using saliva and urine tests to verify smoking cessation. Beyond that, other studies I know of haven’t accounted for the selection effect of only super serious people being willing to risk money. At the extreme, maybe anyone hardcore enough to risk money is hardcore enough to succeed regardless.
How does this study account for that? First, they use an intent-to-treat methodology. That means that they look at the results for everyone randomized into the commitment device treatment, even the ones who refused to participate.
And here’s the first interesting result: Only 14% of people assigned to the carrot-and-stick treatment — risking $150 to win $650 — were willing to play. But those 14% did so well (52% of them succeeded in quitting) that the whole intent-to-treat group still did significantly better than the control group of smokers trying to quit with no financial incentives.
Then there was the pure reward group. $800 with no strings attached for managing to quit smoking. 90% of people in this intent-to-treat group were happy to participate. Apparently 10% of people hate money.
(Aside: Maybe it’s just 5% that hate free money. Because the pure reward group was actually two groups: One was really no-strings-attached $800 for quitting smoking and 95% of those offered that accepted it. The other was a “collaborative reward” treatment where you were grouped with 5 other people and your rewards depended on the performance of the group. There was even a chatroom to encourage each other. 85% of people in that intent-to-treat group participated — it must’ve seemed like too much hassle to the other 15%. Or they didn’t hate money but they did hate people. In any case, since the individual vs group-oriented treatments had no significant effect on smoking cessation rates, the two variations were combined in most of the analysis. Hence the 90% overall acceptance rate for the pure reward group. And as an aside to this aside, collaborative penalties, like GymPact where the losers pay the winners, didn’t help either, compared to the individual version.)
Using straight up intent-to-treat analysis, pure rewards do best. Here are the key numbers for smoking cessation rates:
- 6% quit in the control group with standard treatment, no money
- 16% quit in the pure $800 reward group
- 10% quit in the commitment contract group risking $150 + $650 reward
- (52% quit among the 14% of the commitment contract group who actually participated)
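To see how those intent-to-treat numbers hang together, here’s a back-of-the-envelope sketch in Python. It uses the rounded percentages above (not the study’s exact figures), so the implied quit rate among decliners is only approximate:

```python
# Rounded figures from the list above (illustrative, not exact study data).
accept_rate = 0.14        # share assigned to commitment contracts who accepted
quit_if_accepted = 0.52   # quit rate among those who accepted
itt_quit_rate = 0.10      # intent-to-treat quit rate for everyone assigned

# Accepters' contribution to the intent-to-treat rate:
from_accepters = accept_rate * quit_if_accepted  # about 7.3 points of the 10%

# The remainder implies a modest quit rate among the 86% who declined:
implied_decliner_rate = (itt_quit_rate - from_accepters) / (1 - accept_rate)

print(f"accepters contribute {from_accepters:.1%} of the ITT rate")
print(f"implied quit rate among decliners: {implied_decliner_rate:.1%}")
```

The point of the arithmetic: even though only 14% accepted, their 52% quit rate is enough to lift the whole assigned group well above the 6% control rate.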
In other words, pure rewards yield the most smoking cessation, mostly because so many more people are willing to be incentivized that way. The natural follow-up speculation (“if only we could get more people to accept the carrot-and-stick approach!”) sounds super suspicious, because we don’t know whether the outsized success of the precommitters was simply because only people who were going to succeed anyway were willing to risk their own money.
But the authors did some fancy statistics and concluded the carrot-and-stick treatment really is better. In fact, they estimate that the people choosing the commitment contracts would have to be 12.5 times more likely to quit smoking on their own before you’d have to reverse the conclusion that carrot-and-stick results in more success than pure carrot.
I’m highly biased to believe that (and remember to “beware the man of one study”) but even if all the tricky statistics are wrong, we’re still left with a practical point: rewards require someone to fund them, penalties don’t. In cases where you don’t have a third party to fund rewards for you, you can always find a third party to collect your penalties.
Coverage I can vouch for as being reasonable includes Marginal Revolution, NPR, and The New York Times. Cass Sunstein (of Nudge fame) also has a nice review of the results. (Hover over links for commentary.)
But reading mainstream media coverage (or blog coverage) of scientific papers is more often than not a game of telephone. Papers have nice abstracts (or TL;DRs, as the internet calls them), introductions, and discussion sections that are usually at least as readable as news articles, with the added bonus of not horribly misleading you about the conclusions of the research. As a matter of principle, I recommend skipping the coverage in the above paragraph and going straight to the paper.