U.S. Religious Knowledge Survey
FAQs About Measuring Religious Knowledge (updated)
The release of the U.S. Religious Knowledge Survey has generated record traffic on the Pew Forum’s website as well as many individual emails from readers. We are very grateful for the feedback. Here are some of the most common questions we’ve been receiving.
Why didn’t you include smaller religious groups such as Muslims, Buddhists, Hindus, Orthodox Christians and Jehovah’s Witnesses in your survey results?
Members of these relatively small (in the U.S.) religious groups were included in the survey, and their answers are reflected in the total figures; no one was excluded from the survey on the basis of their religious affiliation. Unfortunately, however, there are too few members of these smaller religious groups in our survey sample to analyze and report their results separately. Greek and Russian Orthodox Christians, Muslims, Buddhists, Sikhs, Hindus, Wiccans, Jehovah’s Witnesses, Baha’is and numerous other religious groups each make up less than 1% of the U.S. adult population, according to the Pew Forum’s 2007 U.S. Religious Landscape Survey. As a result, even in a relatively large, nationally representative survey of 3,412 people, we still had fewer than 30 Muslim respondents and fewer than 20 Hindus, for example. In keeping with sound survey research practice, we generally do not report separate results for any group that has fewer than 100 members in the sample.
Your website slowed down while I was completing the online quiz, and my scores were not recorded, so how can I trust your survey?
We sincerely apologize for the inconvenience. The huge number of people trying to take the online quiz overwhelmed our Web servers, which at peak periods may have timed out or miscalculated quiz results. We very much appreciate the patience of the many people who returned to complete the quiz and view the report.
Rest assured, however, that the online quiz has absolutely no bearing on the national survey results. The national survey was conducted by telephone and was completed long before the Web quiz went online. The quiz contains fewer than half – just 15 – of the 32 religious knowledge questions that were included in the telephone survey. People who take the online quiz can see how they do on those 15 questions compared with the nationally representative sample of U.S. adults who took the telephone survey between May 19 and June 6, 2010.* But the online quiz and the telephone survey are totally separate. How people do on the quiz does not change the results of the survey.
I think the survey question about nirvana had two possible correct answers, Buddhism and Hinduism.
The question asked, “Which of these religions aims at nirvana, the state of being free from suffering? Islam, Buddhism or Hinduism.” We scored Buddhism as the correct answer because nirvana is a central aim in Buddhism. It is true, however, that there is a similar concept in Hinduism, called “moksha.” Some survey respondents who are familiar with Hinduism might have equated moksha with nirvana and therefore chosen Hinduism as the answer. The results on this question were: 36% of respondents said Buddhism, 16% said Hinduism, 5% said Islam and 43% said they didn’t know. If both Buddhism and Hinduism were treated as correct answers to this question, what would that do to the survey’s overall results? The average number of religious knowledge questions answered correctly would go up slightly, from 16.0 to 16.2. There would be a small bump upward in the overall scores for every religious group analyzed in the survey, but there would be no substantive change in the findings about which religious groups did best overall.
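The arithmetic behind the rescoring can be sketched in a few lines. This is an illustrative calculation using only the figures cited above (a 16.0 average and a 16% “Hinduism” response share), not a rerun of the actual survey data:

```python
# Sketch of the rescoring effect, using figures cited in the report.
# If "Hinduism" were also scored as correct, each of the 16% of
# respondents who chose it would gain exactly one point.
original_mean = 16.0       # average correct answers out of 32
share_hinduism = 0.16      # share of respondents who answered "Hinduism"

# Each affected respondent gains one point, so the mean rises by that share.
rescored_mean = original_mean + share_hinduism

print(round(rescored_mean, 1))  # 16.2, matching the figure above
```

The shift is small because only one question out of 32 is affected, and because every group gains roughly in proportion to how often its members chose Hinduism, the relative standings are essentially unchanged.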
Isn’t Saturday, rather than Friday, the correct answer to the question about the Jewish Sabbath?
No. The exact question wording was: “When does the Jewish Sabbath begin? Does it begin on Friday, Saturday or Sunday?” The Jewish Sabbath begins on Friday evening, at sundown, and continues until sundown on Saturday. Of the more than 200 Jewish respondents to the survey, 94% gave Friday as their answer to this question. Among the general public, however, just 45% got this question right.
Why did you include seemingly irrelevant questions about politics and other subjects in this survey if it was aimed at measuring religious knowledge?
For two main reasons. First, we wanted to be able to compare people’s levels of religious knowledge with their levels of general knowledge. Second, we were concerned that if we asked questions only about religion, then people who are not particularly interested in religion, or who don’t know much about religion, would break off the interview and not complete the survey. That would yield a biased sample, one with greater religious knowledge than the public as a whole. So we began the survey by asking a question we often ask in political polls: “All in all, are you satisfied or dissatisfied with the way things are going in this country today?” We also included questions about history, literature and science.
What is oversampling, and why did you oversample some religious groups and not others?
Oversampling refers to a variety of techniques that survey researchers sometimes use to generate additional interviews with members of selected groups over and above what would be expected in a national sample. In the U.S. Religious Knowledge Survey, we oversampled four religious groups that each represent about 2% of the adult U.S. population, according to our 2007 Religious Landscape Survey: Mormons, Jews, atheists and agnostics. This was accomplished by re-contacting some members of those groups who had been identified in previous surveys, then re-weighting the sample to reflect their actual share of the overall population. (For more details, see the methodology section of the survey report.) To go below the 2% threshold and oversample the numerous religious groups whose members make up less than 1% of the population would be substantially more difficult and costly.
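The re-weighting step described above can be sketched as follows. The group counts here are hypothetical (only the 3,412 total and the roughly 2% population shares come from the report); the actual methodology used more elaborate weighting, detailed in the survey report:

```python
# Hypothetical sketch of weighting an oversample back to population shares.
# Interview counts per group are invented for illustration; only the total
# sample size (3,412) and the ~2% population shares come from the report.
sample_counts = {
    "Mormon": 200, "Jewish": 200, "atheist": 200, "agnostic": 200,
    "everyone else": 2612,
}
population_shares = {
    "Mormon": 0.02, "Jewish": 0.02, "atheist": 0.02, "agnostic": 0.02,
    "everyone else": 0.92,
}

n_total = sum(sample_counts.values())  # 3,412 in this sketch

# Each group's weight is its population share divided by its sample share,
# so deliberately oversampled groups count for less in weighted totals.
weights = {
    g: population_shares[g] / (sample_counts[g] / n_total)
    for g in sample_counts
}

# After weighting, each group's effective size matches its population share.
for g in sample_counts:
    weighted_share = sample_counts[g] * weights[g] / n_total
    assert abs(weighted_share - population_shares[g]) < 1e-9
```

The same logic explains why oversampling sub-1% groups is costly: to reach 100 weighted-down interviews with a group that is 0.5% of the population, a surveyor must locate and re-contact far more of its members than a standard national sample would ever yield.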
Did the survey reflect the most important things to know about religion?
Not necessarily. Nor was it meant to test mere trivia. With help from a panel of experts (including Boston University Professor Stephen Prothero, author of the 2007 book Religious Literacy), the Pew Forum selected questions intended to serve as indicators of how much people know in several areas: religious history, teachings of major religions, religious leaders, Holy Scriptures, the global geography of religion and the role of religion in American public life. The questions included in the survey were intended to be representative of a body of important knowledge about religion; they were not meant to be a list of the most essential facts.
Were some of the survey questions too hard – or too easy?
To discern differences in knowledge, the questions varied widely in difficulty. The harder questions helped to separate out people who are very knowledgeable about religion; the easier questions helped to differentiate the least knowledgeable people from the rest of the public. The proportion of respondents who got each question right ranged from 8% to 89%, and there was a nearly perfect “bell curve” to the overall pattern: On average, Americans correctly answered exactly half the religious knowledge questions (16 out of 32). Very few people got all or nearly all of the questions right, and very few got all or nearly all wrong.
Do the people who agree to take a survey like this tend to be more interested in – and more knowledgeable about – religion?
This was a major concern in the design and testing of the questionnaire. To try to minimize such “nonresponse bias,” the telephone survey began with nonreligious questions, the order and wording of the questions were designed to keep respondents engaged, and each interview was held to an average of roughly 20 minutes, about the same as other Pew Research Center surveys. The Pew Forum also kept careful track of people who quit midstream. All telephone surveys suffer “break-offs”; the break-off rate for this survey was not unusual. That said, those who broke off partway through the questionnaire were doing worse than average up to that point. If all the people who broke off had completed the survey, the average number of correct answers to the religious knowledge questions might have been lower by about one question.
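The size of that effect can be illustrated with a weighted average of completers and break-offs. The break-off count and their projected score below are invented for the sketch; only the completed-sample size and the 16.0 average come from the report:

```python
# Hypothetical sketch: how break-offs could pull down the overall mean.
# The break-off count and their projected average are assumptions for
# illustration; only the 3,412 completers and their 16.0 mean are from
# the report.
completers = 3412
mean_completers = 16.0    # average correct answers among those who finished
breakoffs = 600           # assumed number who quit midstream
mean_breakoffs = 10.0     # assumed pace they were on when they quit

# Pool the two groups into one weighted average.
combined_mean = (
    completers * mean_completers + breakoffs * mean_breakoffs
) / (completers + breakoffs)

print(round(combined_mean, 1))  # ≈ 15.1, roughly one question lower
```

Under these assumed numbers, including the break-offs lowers the average by about one question, consistent with the estimate above.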
Did this survey test people’s recall rather than “real” knowledge?
A person may have learned a fact and know it – it is stored in memory – but be unable to recall it on the spur of the moment. For this reason, the survey consisted mostly of multiple-choice questions, which test people’s ability to recognize a correct answer from a list rather than their ability to recall facts on demand. But four knowledge items – three related to religion, and one not – were asked in an open-ended format, mainly to allow comparison with past surveys that asked those same questions.
How did the survey account for people guessing at answers when questions were multiple-choice?
Guessing was possible, though interviewers subtly discouraged it, telling each respondent twice in the survey, “If you don’t know the answer just tell me and we’ll move to the next question.” For all religious knowledge questions combined, a quarter of the time respondents volunteered that they did not know the answer, about the same rate as they gave wrong answers. The number of “don’t knows” peaked at 71%, on a question about the Jewish philosopher Maimonides.
* Correction added April 2011: Interviewing for the survey actually was completed on Friday, June 4, 2010. (return to text)
Photo credit: Eric Swanson/Corbis