Response from Devin Pope, on religious attendance

All of this is from Devin Pope, in response to Lyman Stone (and myself).  Here was my original post on the paper, concerning the degree of religious attendance.  I won’t double indent, but here is Devin and Devin alone:

“I’m super grateful for Lyman’s willingness to engage with my recent research on measuring religious worship attendance using cellphone data. Lyman and I have been able to go back and forth a bit on Twitter/X, but I thought it might be useful to send a review of this to you Tyler.

For starters, I appreciate that Lyman and I agree on a lot of stuff about the paper. He has been very kind by sharing that he agrees that many parts of my paper are interesting and “very cool work”. Where we disagree is about whether the cellphone data can provide a useful estimate for population-wide estimates of worship attendance. Specifically, Lyman’s concerns are that due to people leaving their cellphones at home when they go to church and due to questionable cellphone coverage that might exist within church buildings, the results could be super biased. He sums up his critiques well with the following: “Exactly how big these effects are is anyone’s guess. But I really think you should consider just saying, `This isn’t a valid way of estimating aggregate religious behavior. But it’s a great way to look at some unique patterns of behavior among the religious!’ Don’t make a bold claim with a bunch of caveats, just make the claim you actually have really great data for!” This is a very reasonable critique and I’m grateful for him making it.

My first response to Lyman’s concerns is: we agree! I try to be super careful in how the paper is written to discuss these exact concerns that Lyman raises. Even the last line of the abstract indicates, “While cellphone data has limitations, this paper provides a unique way of understanding worship attendance and its correlates.”

Here is where we differ though… To my knowledge, there have been just two approaches used to estimate the number of Americans who go to worship services weekly (say, 75% of the time): surveys that ask people “do you go to religious services weekly?” and my paper using cell phone data. It is a very hard question to answer. Time-use surveys, counting cars in parking lots, and other methods don’t allow for estimating the number of people who are frequent religious attenders because they are repeated cross-sections and cannot track the same individuals over time.

There are definitely limitations with the cellphone data (I’ve had about 100 people tell me that I’m not doing a good job tracking Orthodox Jews!). I know that these issues exist. But survey data has its own issues. Social desirability bias and other issues could lead to wildly incorrect estimates of the number of people who frequently attend services (and surveys are going to have a hard time sampling Orthodox Jews too!). Given the difficulty of measuring some of these questions, I think that a new method – even with limitations – is useful.

At the end of the day, one has to think hard about the degree of bias of various methods and think about how much weight to put on each. The degree of bias is also where Lyman and I disagree. In my paper, I document that the cell phone data do not do a great job of predicting the number of people who go to NBA basketball games and the number of people who go to AMC theaters. I both undercount overall attendance and don’t predict differences across NBA stadiums well at all.

The reason Lyman is able to complain about those results so vociferously is that I’m trying to be super honest and include those results in the paper! And I don’t try to hide them. On page 2 of the paper I note: “Not all data checks are perfect. For example, I undercount the number of people who go to an AMC theater or attend NBA basketball games and provide a discussion of these mispredictions.”

There are many other data checks that look really quite good. For example, here is a Table from the paper that compares cellphone visits as predicted by the cellphone data with actual visits using data from various companies:
[Table comparing cellphone-predicted visits with actual visits omitted.]
The cellphone predictions in the above table tend to do a decent job predicting many population-wide estimates of attendance to a variety of locations. The one large miss is AMC theaters where we undercount attendance by 30%. Now about half of that undercount is because the data are missing a chunk of AMC theaters (this is not due to a cellphone pinging issue, but due to a data construction issue). But even if one were to make that correction, we undercount theater attendance by 15%.

Lyman argues that one should be especially worried about undercounting worship attendance due to people leaving their phones at home. I agree that this is a huge concern that is specific to religious worship and doesn’t apply in the same way for trips to Walmart. I run and report results from a Prolific Survey (N=5k) that finds that 87% of people who attend worship regularly indicate that they “always” or “almost always” take their phone to services with them. So definitely some people are leaving their phones at home, but this survey can help guide our thinking about how large that bias might be. Are Prolific participants representative of the US as a whole? Certainly not. There is additional bias that one should think about in that regard.

Overall, my view is that estimating population-wide estimates for how many people attend religious services weekly is super hard and cellphone data has limitations. My view is that other methods (surveys) also have substantial limitations. I do not think the cellphone data limitations are as large as Lyman thinks they are and stand by the last line of the abstract that once again states, “While cellphone data has limitations, this paper provides a unique way of understanding worship attendance and its correlates.”

All of that was Devin Pope!
