Unmasked


The CDC's Latest Study on Masks is Purposeful Misinformation

An already discredited agency hits a new low

Ian Miller
Feb 7

“Misinformation” is one of the most overused terms in our modern world.

Instead of describing information that is purposely misleading, it has become an easy shorthand for major media outlets to dismiss information they don't like.

But misinformation is real — there are people, officials, and entire government agencies who disseminate information that is demonstrably incorrect in order to maintain their narrative or encourage compliance with their mandates.

Unfortunately, this isn’t a new phenomenon; at this point most people have come to realize that not everything reported by the government and mainstream media is accurate.

But this practice has clearly become more frequent and more disturbing during the COVID-19 pandemic.

This is not simply because agency recommendations and government mandates have completely failed to accomplish what was promised, or because officials have lied to cover that up, but because the government and its partners are now openly advocating for censorship of those who expose their shortcomings.

The CDC has had a credibility problem for quite some time. Beyond their flip-flopping and unfortunate cooperation with teachers' unions, they've amplified some truly terrible studies to justify their recommendations, a number of which are chronicled here.

But this latest study, their most recent attempt to defend their endless mask recommendations, is truly unconscionable.

There are so many flaws it’s hard to even know where to start, but it’s important to debunk this level of purposefully misleading garbage because it’s being shared by the usual misinformation crowd.


Not Statistically Significant

A definition of statistical significance is “…the claim that a result from data generated by testing or experimentation is not likely to occur randomly or by chance but is instead likely to be attributable to a specific cause.”

Most well-constructed studies do not attribute an outcome to a specific cause without statistical significance.

For example, in the DANMASK study, a randomized controlled trial designed to test the hypothesis that mask wearing would prevent infection with COVID, the results pointedly note the lack of statistical significance in every measurement:

In a per protocol analysis that excluded participants in the mask group who reported nonadherence (7%), SARS-CoV-2 infection occurred in 40 participants (1.8%) in the mask group and 53 (2.1%) in the control group (between-group difference, −0.4 percentage point [CI, −1.2 to 0.5 percentage point]; P = 0.40) (OR, 0.84 [CI, 0.55 to 1.26]; P = 0.40). Supplement Figure 2 provides results of the prespecified subgroup analyses of the primary composite end point. No statistically significant interactions were identified.
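To make that concrete, here's a quick sketch in Python of how a result like this is checked for significance. The group sizes are back-calculated from the quoted percentages, so they're approximate rather than the paper's exact denominators:

```python
import math

# Group sizes reconstructed from the quoted percentages (approximate;
# the paper reports the exact denominators).
mask_pos, mask_n = 40, round(40 / 0.018)   # ~2222 in the mask group
ctrl_pos, ctrl_n = 53, round(53 / 0.021)   # ~2524 in the control group

# Odds ratio and 95% CI on the log scale (the standard Woolf method).
a, b = mask_pos, mask_n - mask_pos
c, d = ctrl_pos, ctrl_n - ctrl_pos
odds_ratio = (a / b) / (c / d)
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)

print(f"OR = {odds_ratio:.2f}, 95% CI = {lo:.2f} to {hi:.2f}")
# Prints roughly OR = 0.85, 95% CI = 0.56 to 1.29, close to the paper's
# reported 0.84 (0.55 to 1.26). The interval straddles 1.0, so the
# difference could easily be chance: not statistically significant.
```

Because the interval straddles 1.0, the observed difference is entirely consistent with chance, which is exactly what "not statistically significant" means.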

Statistical significance is an important tool — except for the manner in which the CDC and these researchers used it.

Here is the graphic that the CDC publicized, which was promptly used and redistributed by political activists in order to prove masks work:

This figure describes how people who wore a face covering were less likely to test positive than people who didn’t wear one.

There’s a lot going on here, so you’d be forgiven for not observing one of the most important elements — the symbol next to the “cloth mask” notation.

Notice that it corresponds to a sentence at the very bottom of the graphic, to the left of the MMWR logo. It’s hard to see, so I’ll repeat it here:

“Not statistically significant”

The CDC posted this graphic, which will be used to inform public policy, local school boards, politicians and corporate executives, and purposefully included a result that was not statistically significant.

That’s misinformation.

It’s an intentional attempt to deceive the public by utilizing a result that did not meet the bare minimum requirements to be “significant” in order to push an agenda.

It's the textbook definition of misinformation and belongs on the long list of statements that have discredited the CDC. Posting a graphic that highlights a non-statistically significant result as a conclusion should not be acceptable. But that's exactly what the CDC did.

And they weren’t done yet.


Self-reporting

Survey results are often incredibly misleading due to biases in self-reporting. People often lie or misremember when asked questions by someone they perceive as an authority.

In fact, Jason Abaluck, one of the chief architects of the Bangladesh mask study, which was designed to attract news coverage by concluding that masks worked, pointed out one of the key flaws in this methodology himself:

Jason Abaluck (@Jabaluck), October 8th, 2021:

"It would be absurd to conclude from the mask data you can never trust *anything* anyone says. What is true is that people's recollections about normative behaviors are often biased -- if you think you're supposed to wear a mask, you overstate mask-wearing."
When you think you’re supposed to answer that you wore a mask, you overstate mask-wearing.

In one of his many defenses of his work, which deliberately misled media outlets into thinking that masks were effective, Abaluck again highlighted that self-reporting can often be unreliable:

Jason Abaluck (@Jabaluck), September 6th, 2021:

"First, let's grant the considerable assumption that self-reported mask use = real mask use (we know it generally doesn't). Suppose masks prevent 50% of cases and deaths so: observed cases_t = (potential cases_t)*(1-fraction wearing masks*0.5)."

So a researcher who purposefully sliced his results to reach an extraordinarily weak outcome in order to sell masking acknowledged that self-reported mask usage is not a reliable measurement.

What did the CDC do here?

They relied on self-reported data.

After obtaining informed consent from participants, interviewers administered a telephone questionnaire in English or Spanish. All participants were asked to indicate whether they had been in indoor public settings (e.g., retail stores, restaurants or bars, recreational facilities, public transit, salons, movie theaters, worship services, schools, or museums) in the 14 days preceding testing and whether they wore a face mask or respirator all, most, some, or none of the time in those settings. Interviewers recorded participants’ responses regarding COVID-19 vaccination status, sociodemographic characteristics, and history of exposure to anyone known or suspected to have been infected with SARS-CoV-2 in the 14 days before participants were tested. Participants enrolled during September 9–December 1, 2021, (534) were also asked to indicate the type of face covering typically worn (N95/KN95 respirator, surgical mask, or cloth mask) in indoor public settings.

Everything in this graphic was determined based on self-reporting. There is no verification of any of it; the study simply relied on people giving truthful and comprehensive answers to questions from a public health agency that repeatedly stresses the importance and moral value of mask wearing.
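To see how this alone can manufacture a "masks work" signal, here's a toy simulation. Every number in it is hypothetical, chosen only to illustrate the mechanism Abaluck describes: in this simulated world masks do nothing, but controls overstate their compliance more than cases do.

```python
import random

random.seed(0)

TRUE_MASK_RATE = 0.70  # identical for cases and controls: masks do nothing here
CASE_BUMP = 0.25       # hypothetical: chance a non-masking case claims masking
CTRL_BUMP = 0.50       # hypothetical: controls overstate compliance more

def reported_mask(bump):
    """True mask use, plus a chance that a non-wearer claims to have masked."""
    if random.random() < TRUE_MASK_RATE:
        return True
    return random.random() < bump

cases = [reported_mask(CASE_BUMP) for _ in range(600)]
controls = [reported_mask(CTRL_BUMP) for _ in range(1200)]

a, b = sum(cases), len(cases) - sum(cases)            # cases: masked, unmasked
c, d = sum(controls), len(controls) - sum(controls)   # controls: masked, unmasked
print(f"spurious odds ratio = {(a / b) / (c / d):.2f}")  # well below 1.0
```

The simulated odds ratio comes out around 0.6, an apparent protective effect conjured entirely out of who exaggerates more on the phone.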

It’s completely and utterly ridiculous that this measurement was even conducted, let alone published as some kind of “scientific” study.


Sample Sizes

Perhaps the biggest contributor to random chance and variance influencing a result is a small sample size.

For example, baseball is generally considered the sport with the least variance between teams, meaning that any individual outcome is incredibly hard to predict. The gaps between teams are so small that even the 162-game season often leads to wildly unpredictable results, with a five- or seven-game playoff series being essentially random.

So when the CDC, knowing full well the ramifications of their recommendations on daily life, posts a study, you'd assume it would be a thorough, far-reaching examination with thousands or tens of thousands of never-masked and always-masked participants, so that the randomness and chance inherent in small sample sizes are safely minimized.

How many “control group” participants who never wore masks did this study have, then?

Overall, 44 (6.7%) case-participants and 42 (3.6%) control-participants reported never wearing a face mask or respirator in indoor public settings (Table 2), and 393 (60.3%) case-participants and 819 (69.6%) control-participants reported always wearing a face mask or respirator in indoor public settings.

44 who tested positive and 42 who didn’t.

That was their sample size for people who self-reported not wearing masks. 86 people total.

Here’s the full table highlighting the massive disparity in numbers:

86 total people reported not wearing a mask and 1,742 reported mask usage. The CDC and these researchers thought that was an appropriate distribution. Their entire graphic is based on adjusted odds and p-values from 44 people who reported a positive COVID test and claimed no mask usage.

44 people.

From February 2021-December 2021.

In a state with 39.5 million people.

Not to mention that of those who self-reported a positive test, 93.3% said they wore a mask some, most or all of the time.

But that wouldn’t make for a very convincing graphic, now would it? Here’s an example of how that would look:

Doesn’t look as good for the mask users, does it?

Obviously this is a base rate issue, but it illustrates perfectly that highlighting and promoting desired results is what the CDC does best.
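The base rate mechanism itself is easy to demonstrate. In the sketch below every number is hypothetical: masks are assumed to cut risk in half, yet because nearly everyone masks, the vast majority of positives are still mask wearers:

```python
# Hypothetical inputs: masks work (cut risk in half) and ~93% of people mask.
mask_prevalence = 0.93
base_risk = 0.05          # assumed infection risk without a mask
relative_risk = 0.50      # assumed risk multiplier for mask wearers

positives_masked = mask_prevalence * base_risk * relative_risk
positives_unmasked = (1 - mask_prevalence) * base_risk

share = positives_masked / (positives_masked + positives_unmasked)
print(f"{share:.0%} of positives wore masks")  # ~87%, even though masks 'work'
```

So the share of positives who masked mostly reflects how many people mask, not whether masks work, and that cuts in both directions.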

They took an infinitesimal sample size and adjusted the odds ratios to claim a statistically insignificant result in favor of cloth masks.

And that doesn't account for the self-reporting issues that might bias the results, such as over-reporting of compliance, or a reluctance among those who always wore masks to admit to positive tests.

The Science™.


Confidence Intervals

Perhaps the worst element of this study is the methodology used to create the misleading percentages from the earlier graphic.

We’ll return to that in a minute, but here’s the description from the study:

This analysis was not restricted to persons with no self-reported known or suspected SARS-CoV-2 contact given that this secondary analysis was underpowered upon exclusion of these participants (N = 316) because adjusted models did not converge. Instead, models adjusted for history of known or suspected contact as a covariate. In a sensitivity analysis restricting to participants who did not report known or suspected contact (N = 316), conditional logistic regression models were used to estimate that the unadjusted odds ratios of face mask use by type of face mask with matching strata defined by the week of SARS-CoV-2 testing: 0.13 (95% CI = 0.03–0.61), 0.32 (95% CI = 0.12–0.89), and 0.36 (95% CI = 0.13–1.00) for N95/KN95 respirators, surgical masks, or cloth masks, respectively, relative to no face mask or respirator use.

Did you notice the first sentence?

“This analysis was not restricted to persons with no self-reported known or suspected COVID contact given that this secondary analysis was underpowered upon exclusion of these participants because adjusted models did not converge.”

They didn’t exclude people with potentially known close contacts!

This is absolutely insane. It completely corrupts the entire point of the investigation. How are you supposed to determine whether mask wearing in public places made a difference if you have no idea whether those who tested positive were infected by a household contact?

It’s the height of intellectual dishonesty to even publish these results, let alone create a graphic based on this data. Essentially, the researchers knew it would be impossible to ascertain meaningful results if they excluded those with potential known contacts, so they just shrugged and included them anyway. Absolutely remarkable.

This is yet another example of the sample size issues mentioned above. They simply didn't have enough people to obtain meaningful results, so they committed statistical malpractice to create the graphic they wanted.
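A back-of-the-envelope power check, using the same approximate cell counts as before, shows just how little a sample this size could ever detect:

```python
import math

# With roughly 44 and 42 never-maskers against ~613 and ~1125 mask wearers,
# how large would an effect have to be before it could reach significance?
se = math.sqrt(1 / 44 + 1 / 42 + 1 / 613 + 1 / 1125)
lo, hi = math.exp(-1.96 * se), math.exp(1.96 * se)
print(f"only odds ratios outside {lo:.2f} to {hi:.2f} can reach significance")
# Prints roughly 0.65 to 1.54: any true mask effect weaker than about a
# one-third reduction in odds would be invisible in a sample this small.
```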

It’s not even worth discussing the actual confidence intervals given the utterly useless criteria, but for the sake of thoroughness, we’ll look at those too:

conditional logistic regression models were used to estimate that the unadjusted odds ratios of face mask use by type of face mask with matching strata defined by the week of SARS-CoV-2 testing: 0.13 (95% CI = 0.03–0.61), 0.32 (95% CI = 0.12–0.89), and 0.36 (95% CI = 0.13–1.00) for N95/KN95 respirators, surgical masks, or cloth masks, respectively, relative to no face mask or respirator use.

0.03-0.61
0.12-0.89
0.13-1.00

This is useless. It’s nearly the entire range of possibilities from zero benefit to a huge benefit. It’s nonsense.
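One way to quantify how uninformative these intervals are is the ratio of each upper bound to its lower bound, using the values from the quoted passage:

```python
# Confidence intervals from the study's sensitivity analysis.
cis = {"N95/KN95": (0.03, 0.61), "surgical": (0.12, 0.89), "cloth": (0.13, 1.00)}

for mask, (lo, hi) in cis.items():
    print(f"{mask}: {hi / lo:.0f}-fold range, "
          f"includes 1.0 (no effect): {hi >= 1.0}")
```

A 20-fold range for N95s means the data can't distinguish a mask that prevents 97% of infections from one that prevents about 40%, and the cloth interval runs all the way up to no effect at all.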

They underpowered the results with small sample sizes (35 total people who never wore masks, across more than three months), included participants who should never have been included, and got useless results that they promoted anyway. Completely unconscionable.


Reality

When you understand just how unbelievably incompetent the methodology was, how underpowered the investigation was, how misleading the confidence intervals were, and how desperate the researchers were in including non-statistically significant results, it makes sense that reality completely contradicts their disastrously bad graphic.

N95 mandates haven’t mattered:

95+% measured observational (not self-reported) mask compliance indoors didn’t matter:

And importantly, look at when the study starts and ends:

Wondering why the numbers look low? Because here’s what happened afterwards:

Whoops!

Not to mention the issues with vaccination status among participants and reasons for testing.

This “study” is absolutely ridiculous, and a new low point for the CDC. They’ve completed their divorce from science, data and evidence and moved exclusively into political advocacy.

We've learned over the past few years that they are completely untrustworthy and will purposefully mislead to suit their needs, but this is a new level of misinformation even for them.

It will, however, achieve its desired purpose — countless retweets from credentialed political activists and media coverage by those too blinded by ideology to see the devastating flaws.

Misinformation is dangerous, and all too often it comes from the CDC.
