A Snapshot in Time: Contextualizing the Community Impact Survey | 25, Issue 14
It has been more than a year since COVID-19 began to change the world. For many of us, the pandemic and the uncertainty it brings continue to impact our daily lives.
KATIE VON DER LIETH revisits the findings of the SCA’s COVID-19 Community Impact Survey while exploring the role of “snapshot” surveys in a greater ecosystem of market data.
About a week after lockdown orders were implemented in California, we did what many terrified and recently quarantined people did: we connected with our friends. We are fortunate to count Carla Martin, the Executive Director of the Fine Cacao and Chocolate Institute (FCCI), as one of them. We learned that, in response to the pandemic, the FCCI had launched a survey to understand its impact on the chocolate and cacao community, and we were inspired to do the same.
Using the FCCI survey as a jumping-off point, we launched two initial surveys: one in March and one in June, with over 2,300 respondents combined. We also launched two series of webinars and videos to share the results and provide information on topics that we identified as key needs for our community members, including marketing, delivery, getting into grocery, and COVID-19 transmission and safety.
This rapid collection and sharing of data has value for our community, but there are also gaps and limitations to this type of survey. It's not that we were unfamiliar with conducting surveys (we're regularly involved with surveys ranging from event feedback to roaster and retailer financials); it's that we had never collected this type of data before. Not only was our target population incredibly broad, but gathering data about a rapidly changing global pandemic, one presumably impacting every node of the value chain in a unique way, made data analysis complicated. Given the sheer number of variables, there are almost infinite potential paths of analysis.
But in the desire to get this important information to our community as fast as possible, we forged ahead without knowing the path or what we would find. Now that we have two surveys under our belt and have just launched a third, it’s time to ask ourselves: Was this a worthwhile endeavor? What did we learn? How can we do better?
Looking to Academia for Guidance
In attempting to understand how useful this kind of survey is, we looked to academia for guidance. As it happens, there is currently a very lively debate in the academic community about how research has historically been framed and accepted into the “official” body of knowledge (namely, peer-reviewed academic journals). Much of this debate focuses on the importance of statistical significance, novelty, and originality in determining the value of research.
In many academic disciplines, the intellectual merit of research depends largely on a statistical calculation called a “probability value,” or “p value.” Without wading too deeply into the world of statistics, researchers will often conduct a statistical test to determine whether their results are “statistically significant.” The statistical test spits out a p value, and if the p value is .05 or smaller (meaning that, if there were truly no underlying effect, results at least this extreme would occur less than 5% of the time), the results are deemed significant. This is often a criterion academic journals use for publication.
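To make this concrete, here is a minimal sketch in Python using the scipy library, with made-up figures rather than actual survey data: two hypothetical groups of retailers report their percentage change in sales, and a two-sample t-test produces the p value.

```python
# A minimal sketch of a significance test with made-up numbers
# (not actual survey data): comparing reported sales changes for
# two hypothetical groups of retailers.
from scipy import stats

# Hypothetical percentage changes in sales for each group
one_outlet = [-10, -5, 0, 5, -15, 10, -5, 0]
multi_outlet = [-30, -25, -40, -20, -35, -15, -25, -30]

# A two-sample t-test asks: if both groups really had the same
# average, how often would a difference at least this large occur?
result = stats.ttest_ind(one_outlet, multi_outlet)
print(f"p value: {result.pvalue:.4f}")

# By the conventional threshold, p < .05 counts as "statistically
# significant" -- and results above it often go unpublished.
if result.pvalue < 0.05:
    print("Statistically significant at the .05 level")
```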
In addition to statistical significance, novelty and originality are highly valued in the academic community. Academic publishers tend to reject research that produces uninteresting results or finds no relationship between variables in favor of interesting and/or counterintuitive results. These criteria, in combination with other publishing requirements such as methodological integrity, the cost to publish, and command of the journal’s publication language, make for very high barriers to publication. Top academic journals publish fewer than 10% of submitted articles.[1]
From an academic standpoint, the rigorous, grueling nature of peer-reviewed publication has been a necessary process for ensuring that our shared body of knowledge is as unbiased and accurate as possible. In light of the distressing rise of “fake news” outlets and articles, which are characterized by a lack of editorial scrutiny (in addition to their often nefarious nature),[2] this rigorous process feels more necessary than ever.
But there is a debate about the merits of this system, and it boils down to a question of value: How do we decide what information is worth sharing and how do we frame it responsibly?
In the case of academia, challengers of the status-quo process argue that the rigidity of the p value threshold and the bias towards novelty have led to overconfidence in “significant” results and a huge knowledge gap around “what didn’t work” or even “what was expected.” This phenomenon is often referred to as “publication bias.” The tendency of academic journals to publish results that are significant, novel, and counterintuitive, and not to publish results that are expected, quotidian, or somehow flawed, warps our view of reality: we think publications are more representative than they actually are because we don’t see what goes unpublished.
This conversation helps us to situate the work we’ve done with our quick COVID-19 snapshot surveys; it tells us that there is substantial value in sharing expected and imperfect research, provided it’s framed responsibly. Even “unexciting” results are important! The same discourse also confirms that the best way to frame this kind of work is through thoughtful and transparent collection and interpretation, providing context, and accepting uncertainty.
Context Corner: Comparing and Contrasting Results
We did a fair job adhering to this framework—describing the survey as “illustrative” rather than representative, acknowledging that the data was likely subject to self-selection bias,[3] and providing multiple possible interpretations—but improvements can always be made, especially when it comes to providing additional context.
At the time we released our survey results, surveys from other organizations were available that could have provided valuable contextual information to corroborate or challenge our findings. So, with that in mind, let’s revisit some of the most interesting findings from our June 2020 survey, this time with more context.
One such trend was the transition away from in-person sales towards takeaway, online sales, and delivery. Over two-fifths (43%) of respondents in our survey experienced a “significant increase” in online sales, and building an online presence and market was a commonly cited adaptation strategy. Unsurprisingly, this accords with findings from several other sources, including data presented in this issue’s Insight, which shows the myriad ways coffee companies across the globe have pivoted rapidly to takeaway and e-commerce to provide a specialty coffee experience to consumers at home. Data from Square, the point-of-sale system common in coffee shops, similarly showed a 521% increase in the number of sellers offering curbside and pickup services.
A survey by Caravela, a Latin American coffee exporter and importer, also corroborated this trend: responses from 143 specialty coffee roasters showed that strengthening online sales and focusing on direct-to-consumer sales were two of the most popular strategies. It is clear that developing a robust online presence has been, and will likely continue to be, an important survival strategy during the pandemic, and perhaps beyond.
Another trend worth re-examining is the sentiment of the coffee community. When the survey was disseminated in June 2020, respondents indicated a more positive sentiment than we expected. Among English-speaking respondents, a remarkable 46% expected to be at or above their pre-COVID-19 sales in nine months’ time. And though Spanish speakers were slightly more negative, 66% expected to be at 75% or more of their pre-COVID-19 sales, also in nine months’ time.
These findings appear to contradict the overwhelmingly negative outlook for coffee shops reported by several news outlets and research organizations.[4] While this discrepancy could be related to potential self-selection bias in our survey (businesses that are faring better are more likely to respond, and those that have gone out of business are unlikely to respond at all), the timing of the survey could also have played a role in respondents’ sentiments. Our results are from June 2020, a time of relatively lower rates of COVID-19 in the United States, South Korea, China, and the United Kingdom, the four countries contributing the most responses to our survey. This could have positively skewed the results.
The final trend worth digging into again is our finding that smaller businesses tended to fare better than larger ones. We found that retailers with just one outlet reported smaller decreases in sales than retailers with two or more outlets, and one-outlet operations also had a more positive outlook on future revenues.
This finding is again corroborated by Caravela’s study. It reported that “smaller roasters … seem to have been less impacted than medium sized roasters, with 26% reporting no or minimum impact on their demand and 17% reporting that their sales were already above pre-COVID [levels].”[5]
This contrasts with data from Euromonitor, which finds that larger chains will fare better than smaller ones. One potential explanation for the discrepancy is the previously mentioned self-selection bias: businesses that have closed or are not doing well were not motivated to take our survey, so an over-representation of more successful businesses produced a positive bias.
Another plausible explanation relates to our connection with the specialty coffee sector. Unlike Euromonitor, which uses macro-level data on all types of coffee shops, our community likely skews towards a more specialty population, a unique microcosm of the coffee shop ecosystem. The impact of business size on resilience will continue to be an area of research for us, but in the meantime our evidence points to the agility of small businesses as a positive attribute.
Although there is a sense of hope that we will return more closely to “normal” in the future, there’s no doubt that the uncertainty of the pandemic will continue to play out across all kinds of market research in the years to come. But the ability to interpret and contextualize the results of surveys, both “illustrative” and otherwise, is a valuable tool for any specialty coffee business. ◇
KATIE VON DER LIETH is the Research Program Manager at the Specialty Coffee Association and Coffee Science Foundation.
Find an overview of all the snapshot survey results in Coffee Retail Summit's Library or by participating in the virtual event April 13-14, 2021. Visit retail.sca.coffee.
Asking Questions
Whether you’re looking to understand the business impacts of the pandemic or simply conducting some grassroots consumer research with your customers, your results are only as good as the questions you ask. Here’s a quick guide to writing your own survey!
Start with the end. What are you interested in learning, and from whom? The entire process will be smoother if you have a clearly stated set of desired outcomes. Beware of scope creep! Ask yourself: Is a survey the right tool? Other data collection tools like interviews, observation, or secondary research might better answer your questions.
Research your topic. What information already exists about doing this type of research? Don’t do work that has already been done—many times, example questions and best practices can easily be found on the internet.
Create the survey. Generally, the shorter, the better! Put the most important questions at the beginning as some respondents will abandon the survey before completing it.
Write the questions. Questions should be clear: there should be no confusion about what is being asked. Avoid questions that use jargon, ask for sensitive information, contain multiple clauses, or might bias respondents towards a certain response (for example, “How much did you love our new blend?” presumes a positive experience).
Disseminate. How will you ensure people take your survey? Providing an incentive to participate can help increase your sample size. A larger sample reduces random error, though it can’t correct for self-selection bias; for a rough sense of how sample size relates to precision, see the sketch after this list.
Interpret the data. Contextualize: How does your survey data sit within your experience? How does it compare to other available research and trends? Embrace diversity! Variance in data indicates a multitude of preferences and experiences.
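On the question of sample size, here is a minimal sketch in Python of the standard margin-of-error formula for a survey proportion, using a hypothetical sample of 1,000 respondents (the 43% figure is borrowed from our online-sales finding purely as an example). The formula assumes a simple random sample, which opt-in surveys like ours don’t strictly satisfy, so treat the output as indicative rather than exact.

```python
# A rough sketch of the margin of error for a survey proportion,
# assuming a simple random sample (opt-in surveys don't strictly
# meet this assumption, so the result is indicative only).
import math

def margin_of_error(proportion: float, sample_size: int, z: float = 1.96) -> float:
    """Margin of error at 95% confidence (z = 1.96) for an observed proportion."""
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# Hypothetical example: 43% of 1,000 respondents reported a
# significant increase in online sales.
moe = margin_of_error(0.43, 1000)
print(f"43% +/- {moe * 100:.1f} percentage points")  # roughly +/- 3.1
```

Note that doubling the sample only shrinks the margin by a factor of about 1.4 (the square root of 2), which is one reason question quality usually matters more than chasing an ever-larger sample.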
References
[1] Nature, “Editorial Criteria and Processes,” n.d., https://go.nature.com/3uASNtI
[2] David M. J. Lazer et al., “The Science of Fake News,” Science 359, no. 6380 (2018): 1094–96.
[3] Self-selection bias occurs when survey respondents opt in to a survey, rather than being randomly selected. People who opt in are likely to share views or characteristics that are distinct from those who chose not to take the survey.
[4] Nick Brown, “Pandemic Erased Nearly a Quarter of US Coffee Shop Market, Report Shows,” Daily Coffee News, 2021, http://bit.ly/3dXCmBJ
[5] Alejandro Cadena, “Impact of COVID-19 on Specialty Coffee Roasters: Adapting to New Realities,” 2020, http://bit.ly/2NNBeWR