All text responses were drawn from answers to an open-ended survey question fielded as part of Pew Research Center’s American Trends Panel (ATP). The survey was conducted Sept. 14 to 28, 2017, and the question asked:

We’re interested in exploring what it means to live a satisfying life. Please take a moment to reflect on your life and what makes it feel worthwhile – then answer the question below as thoughtfully as you can.

What about your life do you currently find meaningful, fulfilling, or satisfying? What keeps you going, and why?

In total, 4,492 of 4,867 panelists who completed the survey (92%) answered the question. Researchers used a semi-supervised computational model to identify topics within the responses, described in more detail here. This allowed the team to determine whether individual responses mentioned each of 30 particular topics.

In addition to analyzing aggregate trends in the responses, researchers also selected a sample of responses to include in this qualitative product. To select responses for inclusion in this list, the research team decided to prioritize interesting and unique answers over a representative sample of answers. In other words, their goal was not to provide a representative snapshot of responses, but rather to illustrate the broad range of themes found in the responses, providing readers with qualitative insights into how Americans from different walks of life find meaning in their lives.

To select a diverse sample of interesting and unique responses, researchers computed a “uniqueness” metric for each response, meant to capture the degree to which a response contained distinctive language or uncommon topics. The first component of this metric was based on how common each topic was across all answers: a topic mentioned in 80% of all responses received a weight of 0.2, while a topic mentioned in 20% of responses received a weight of 0.8. Each response was then scored by summing the weights of all the topics it mentioned, so responses with more and/or rarer topics scored higher than other responses. For the 108 responses that mentioned no topics but contained at least five words, researchers assigned the maximum observed score, because the team wanted to examine content that fell outside the scope of the common topics they had identified.
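The topic-weighting step above can be sketched as follows. The topic names, counts, and function names are invented for illustration; the actual topic model and its output are not public.

```python
# Hypothetical sketch of the topic-based scoring component described above.
# Topic names and mention counts are made up for illustration.

def topic_weights(topic_mentions, n_responses):
    """Weight each topic as 1 minus the share of responses mentioning it,
    so rarer topics receive higher weights."""
    return {topic: 1 - count / n_responses
            for topic, count in topic_mentions.items()}

def score_response(topics_in_response, weights):
    """Sum the weights of every topic the response mentions."""
    return sum(weights[t] for t in topics_in_response)

# Example: 10 responses total; "family" appears in 8 of them, "travel" in 2.
weights = topic_weights({"family": 8, "travel": 2}, n_responses=10)
# "family" is common, so it gets a low weight (~0.2); "travel" gets ~0.8.
# A response mentioning both scores ~1.0; one mentioning only "family" ~0.2.
both_score = score_response({"family", "travel"}, weights)
```

A response mentioning many rare topics accumulates a high score, matching the description that responses "with more and/or rarer topics were scored higher."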

The second component of the uniqueness metric was computed using the average cosine similarity of each response to all other responses, providing a rough measure, on a scale of 0 to 1, of how distinctive the language in a response was relative to the others. Researchers multiplied the two component scores, so responses with particularly distinctive language and/or a large number of rare topics received the highest uniqueness scores.
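A minimal sketch of this language-distinctiveness component, assuming simple bag-of-words vectors and taking 1 minus the mean cosine similarity so that more distinctive responses score closer to 1. The exact vectorization and scaling the researchers used are not specified here, so treat this as one plausible implementation.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def distinctiveness(responses):
    """For each response, 1 minus its mean cosine similarity to all others
    (assumed form of the 0-1 distinctiveness measure)."""
    vectors = [Counter(r.lower().split()) for r in responses]
    scores = []
    for i, v in enumerate(vectors):
        others = [cosine(v, u) for j, u in enumerate(vectors) if j != i]
        scores.append(1 - sum(others) / len(others))
    return scores

texts = [
    "my family keeps me going",
    "my family keeps me going every day",
    "restoring vintage motorcycles in the desert",
]
scores = distinctiveness(texts)
# The third response shares no words with the others, so it scores highest.
```

Multiplying this score by the topic-weight score yields the combined uniqueness metric described in the text.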

Next, researchers selected the 1,000 responses with the highest scores and drew a sample of 250 responses from among these candidates. The sample was drawn with survey weights to ensure that the list of answers included responses from a diverse set of respondents, but the selected responses are not meant to be representative. Researchers then manually reviewed the responses, removing those that were repetitive, lacked specificity, or contained a great deal of personally identifiable information. They replaced the removed cases with other responses drawn from the larger list of high-scoring candidates, bringing the list back up to 250.
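Drawing a weighted sample without replacement, as described above, might look like the following sketch, which uses the Efraimidis-Spirakis key method from the standard library. The candidate pool and survey weights are simulated, not the actual panel data.

```python
import random

def weighted_sample(items, weights, k, seed=0):
    """Draw k items without replacement, with inclusion probability
    proportional to weight (Efraimidis-Spirakis key method)."""
    rng = random.Random(seed)
    # Each item gets a random key u**(1/w); the k largest keys win.
    keyed = [(rng.random() ** (1.0 / w), item)
             for item, w in zip(items, weights)]
    keyed.sort(reverse=True)
    return [item for _, item in keyed[:k]]

pool = list(range(1000))                      # IDs of the 1,000 top-scoring responses
survey_weights = [1 + (i % 5) for i in pool]  # simulated survey weights
sample = weighted_sample(pool, survey_weights, k=250)
```

Because the draw is without replacement, each of the 250 selected responses is distinct, and higher-weighted respondents are more likely to appear in the list.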

Researchers then cleaned this final selection of responses, making minor grammatical and spelling edits to improve readability. Since some of the responses contained information about respondents’ education, relationships, hobbies, and health and financial situations, researchers also edited the responses to preserve respondent confidentiality. For example, mentions of specific ages are approximate, and other time references (length of marriage, years since a major life event) were rounded up or down rather than reported exactly. Other specific references to unique personal details were removed entirely where removing the phrase or sentence did not compromise the meaning of the response, and a few instances of strong language were also removed. After all responses had been cleaned in this manner, researchers made a final selection of 100 to include in the published product.