Have COVID-19 vaccines flattened the curve? This question sparked my interest in researching and writing the paper Worldwide Bayesian Causal Impact Analysis of Vaccine Administration on Deaths and Cases Associated with COVID-19: A BigData Analysis of 145 Countries, which is open for public review on ResearchGate. Below is a further explanation of this paper intended for the general public.
Context and Theoretical Background
This question and its answers are critical to understanding whether the widespread public policy of conducting experimental mass vaccination during a novel respiratory pandemic worked. Beyond the interest of virologists, biologists, physicians, and other scientists and researchers, the question also matters from a public policy and political science perspective. As with any other public policy measure (e.g., park and recreation programs, additional lanes on a congested road, hunting and fishing regulations), the mass administration of vaccines worldwide to control COVID-19 cases and deaths must be evaluated for its effectiveness. The urgency of the question is heightened by the questionable legal and ethical basis on which these vaccines were sold to the public and administered worldwide.
In fact, policy makers, governments, mainstream media, and many public and private entities in countries around the world have implemented vaccination requirements based on the assumption that the vaccines would be “effective” in reducing transmission, cases, and deaths. This paper presents a method for calculating whether or not this was the case using an algorithm inspired by Bayesian statistics.
Bayesian statistics is different from the statistics many of us learned in school. In school, we typically learned about sampling from an entire population (e.g., the probability of picking a particular color from a jar of 100 jellybeans) because this provides higher confidence in the inferred answers. Bayesian statistics, on the other hand, assumes that the size of the population from which you draw a sample is unknown, but that you have a set of observations on which to base your predictions. For example, imagine a covered jar of jelly beans that you cannot see into. You have pulled out 50 so far, noting the colors (10 red, 10 yellow, 5 green, 25 purple).
You are now asked to predict the proportions of the different colors in a jar containing an unknown number of jelly beans. Bayes’ theorem and Bayes-inspired algorithms and statistics can be used for this type of question. [If you are interested in learning more about the philosophy behind Bayes’ theorem, please read here; if you are interested in the mathematics and its applications, please read here and here.] This problem arises with many data sets, such as weather data, stock prices, earthquakes, health data, and any other regularly recorded data that is part of an unknown set of future observations. For example, it is not possible to see a company’s entire stock data set until after the company has completely collapsed, and until then people may hold very different views of how the stock is doing; Enron is presented here as an example of this phenomenon. Hindsight is better than foresight, and for this reason, sampling from an entire population is preferable when possible.
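The jellybean update described above can be made concrete. The sketch below is illustrative only and is not from the paper: it assumes a symmetric Dirichlet prior over the four colors, under which the posterior mean proportion of each color after observing the 50 beans has the simple closed form (count + alpha) / (total + k * alpha).

```python
from fractions import Fraction

def posterior_mean_proportions(counts, alpha=1):
    """Posterior mean of each color's proportion, assuming a symmetric
    Dirichlet(alpha) prior and a multinomial likelihood over the counts."""
    total = sum(counts.values())
    k = len(counts)
    return {color: Fraction(n + alpha, total + k * alpha)
            for color, n in counts.items()}

# The 50 beans drawn so far, from the example in the text.
observed = {"red": 10, "yellow": 10, "green": 5, "purple": 25}
estimates = posterior_mean_proportions(observed)
for color, p in estimates.items():
    print(color, float(p))  # e.g., purple -> 26/54, about 0.481
```

As more beans are drawn, the counts dominate the prior and the estimates converge toward the observed frequencies; with few draws, the prior keeps every color's estimated proportion away from zero.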
Data observations related to COVID-19 are similar to jellybeans in a sealed jar in the sense that we do not know the final population size (i.e., how many total cases or deaths we will ultimately see), and we receive new observations from around the world every day. Although Bayesian statistics are not preferable to frequentist statistics when the population size is known, they are useful and applicable in our current context, where it is not.
The method described in this paper uses a package called CausalImpact written in the R statistical programming language. This package was originally written by a team at Google (Brodersen et al., 2015) to analyze the impact of different advertising campaigns on the number of clicks they received from Internet users. The idea was to compare regions where an advertising campaign was launched with one or more control regions where it was not. The time at which a new ad campaign started is called the “treatment” onset (the vertical gray dashed line in the middle of the timeline).
By analyzing the data trends prior to the treatment, as well as the likely trend based on the control regions where the “treatment” was not implemented, a Bayesian-inspired algorithm can be used to predict where the data line would have gone had the treatment not been implemented. This predicted trend line is referred to as the “counterfactual” (blue dashed line) and is compared to the actual path of the data line after the treatment began (black line). The sum of the differences between the counterfactual trend line and the actual data line is reported as the “causal impact” of the treatment (figures 2 and 3), which can be positive or negative (in Google's case, more or fewer clicks). The same approach can be used to analyze COVID-19-associated cases and deaths both before and after treatment (before and after the introduction of public vaccination) to determine the impact of this policy on trends in cases and deaths.
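CausalImpact itself fits a Bayesian structural time-series model to construct the counterfactual; the toy sketch below is an illustration of the arithmetic only, not the package's code. It substitutes a plain least-squares fit of the treated series on a single control series over the pre-period, then computes the pointwise and cumulative differences described above.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (one control series)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def causal_impact_sketch(treated, control, t0):
    """Counterfactual from the pre-period relationship (indices < t0),
    then pointwise and cumulative differences in the post-period."""
    a, b = fit_line(control[:t0], treated[:t0])
    counterfactual = [a + b * x for x in control[t0:]]
    pointwise = [y - yhat for y, yhat in zip(treated[t0:], counterfactual)]
    cumulative, running = [], 0.0
    for d in pointwise:
        running += d
        cumulative.append(running)
    return counterfactual, pointwise, cumulative

# Toy demo: treated tracks 2*control + 1 before t0, then runs above it.
control = [1, 2, 3, 4, 5, 6]
treated = [3, 5, 7, 9, 12, 14]
cf, pw, cum = causal_impact_sketch(treated, control, t0=4)
print(cf, pw, cum)  # [11.0, 13.0] [1.0, 1.0] [1.0, 2.0]
```

The three returned series correspond to the package's three plot panels: the counterfactual overlaid on the original data, the pointwise daily impact, and the cumulative impact. The real package also produces credible intervals around each, which this sketch omits.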
The authors of this package have encouraged others to use it in innovative ways:
At the same time, our approach could be used for many other applications that involve causal inference. Examples include problems in economics, epidemiology, biology, or the political and social sciences. With the release of the R package CausalImpact, we hope to provide a simple framework for all these areas. Structural time series models are being used in an increasing number of applications at Google, and we anticipate that they will prove useful in many other areas of analysis as well (Brodersen et al., 2015, p. 271).
Each graph produced by this package contains three figures and the following information:
Figure 1, Original: the original data together with the counterfactual prediction.
Figure 2, Point-wise: the daily impact, up or down, over time.
Figure 3, Cumulative: the running sum of all daily impacts over time.
Vertical gray dashed line: treatment onset.
Black solid line: the actual data collected.
Blue dashed line: the Bayesian model's prediction of the course of the data, based on the prior observations and the control regions.
Light blue fill around the blue line: the statistical confidence interval; a tighter fill means more confidence in the estimate.
It is important to highlight that the introduction of the vaccine itself has occurred at different rates in different countries, among different populations with different health standards and comorbidities, and with different types of experimental injections. These are just a few of the many confounding variables involved when considering public interventions and their effectiveness in different countries.
Assumed Short-Term Risk vs. Long-Term Behavior
Public policy analysis is not about how individuals respond to the implementation of public policy (that is what clinical trials are for), but rather how mass behavior responds to it. From a public policy perspective, the administration of vaccines, although perhaps phased in across different age groups and occupations, was a nationwide mass campaign in which many governments mandated, or attempted to mandate, the use of these vaccines down to age 5. This is blunt public policy and, in this author's opinion, should be evaluated as such. From a policy perspective, the administrative decision was binary (force vaccines on the population or do not), so the use of the CausalImpact package is appropriate.
According to this analysis of data from 145 individual countries, in more than 80% of the countries the public policy of mass vaccination with novel vaccines against a novel coronavirus does not appear to have reduced cases and deaths, but to have increased them beyond what the Bayesian model predicted. This effect is very pronounced in some countries (e.g., Vietnam, Mongolia, Thailand, Cambodia, Seychelles) that had almost no cases or deaths for over a year, then began mass vaccination and saw a jump in both.
There are many possible reasons for these results, which other researchers have discussed and/or warned about before mass vaccination campaigns began, including leaky vaccines, low absolute risk reduction in the first place, a weakening effect on the immune system, reactions to the cytotoxic spike protein that can present in the hospital as COVID-19-like symptoms and are therefore recorded as COVID-19 cases and deaths, mass false-positive tests, vaccine ineffectiveness against new variants, and others. These and other possibilities continue to be investigated.
Whatever the reason, this is a failure of public policy that has not produced the results promised by policymakers. To grasp the full extent of the policy failure, one must look at pages 66-99 of the study to see the data from around the world, but here are some examples for the countries listed above:
Vietnam: +1099% causal impact of vaccination on total cases per million.
Seychelles: +1978% causal impact of vaccination on total cases per million.
Mongolia: +3391% causal impact of vaccination on total cases per million.
Laos: +6955% causal impact of vaccination on total cases per million.
As part of a secondary analysis of the “causal effect” results, this study also considered how long the vaccines had been circulating in a population and how many doses had been administered, to account for the possibility that gradual introduction by age group might influence the impact on COVID-19-associated cases or deaths. The study found no association between the duration of vaccine administration in a population and the “causal effects” of vaccination. The only moderate association (rho = 0.34, p = 0.001) was a positive one between the number of vaccines administered in a population and the “causal effect” on cases. In other words, the clearest signal in these data is that more vaccines were associated with a greater impact on case increases. The association between the total number of vaccines administered and the increase in deaths was not statistically significant.
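For reference, the Spearman correlation reported here (rho = 0.34) is simply the Pearson correlation of rank-transformed data, which makes it sensitive to any monotone relationship rather than only a linear one. The stdlib sketch below is illustrative and not the paper's code; libraries such as SciPy's scipy.stats.spearmanr compute it (and the p-value) directly.

```python
def ranks(values):
    """1-based average ranks; tied values share the mean of their ranks."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mean rank of the tied run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman_rho(xs, ys):
    """Spearman's rho: Pearson correlation of the rank-transformed data."""
    rx, ry = ranks(xs), ranks(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Any monotone increasing relationship gives rho = 1, even a nonlinear one.
print(spearman_rho([1, 2, 3, 4, 5], [1, 4, 9, 16, 25]))  # 1.0
```

A rho of 0.34 is a moderate monotone association between doses administered and the estimated “causal effect” on cases across countries, not a deterministic relationship.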
These results were obtained in late October 2021 and finally uploaded to the preprint server in mid-November 2021. The predictive ability of the Bayesian algorithm used in CausalImpact shows its strength here: before the start of the major winter flu season in the Northern Hemisphere, and right at the start of Omicron, it indicated that more vaccines in a country were, or would be, associated with a greater impact on cases, but not necessarily a greater impact on deaths. This is almost exactly what we saw this winter in countries in Europe and North America and in highly vaccinated Israel, which showed the worst results (it is also the country with the highest vaccination rate).
Causality and Future Research
The results of this study provide temporal evidence of causality, satisfying the temporality criterion of the Bradford Hill criteria (the effect follows closely upon the intervention). A further criterion holds that a causal result must be predictable from other evidence or mechanisms. This has been demonstrated, in part, by the fact that the increases in cases and deaths following vaccine administration were predicted by many.
Whether this predictability criterion is met could be further examined using a number of mechanistically predicted markers of complications and adverse events associated with these novel vaccines. These data could include: myocarditis/pericarditis in young people, heart attacks in young people, strokes in young people, cancer, autoimmune diseases, signs of a weakened immune response (e.g., recurrence of viruses such as shingles and herpes), and many other variables.
These data go back many years or decades and would therefore provide very good predictive capacity for the Bayesian algorithm used in CausalImpact. If there is a dramatic increase in the rates of these predicted outcomes after this public treatment is initiated, that would satisfy the predictability criterion with much more supporting evidence. These studies are already underway and will continue as more data from 2021 and 2022 become available.
In short, we need more scientific research, as always, to confirm or refute these findings. The goal of this study was to provide a general global overview of the impact of mass vaccination during a respiratory virus pandemic. This was done in the hope of stimulating areas for future specific research that could be addressed using the causal impact analysis method described in this paper. In addition, this study was intended to draw the attention of constituents and policy makers to the data and ethical aspects of the impact of these experimental public policies.