
Different websites (such as https://ivmmeta.com/, https://c19ivermectin.com/, https://tratamientotemprano.org/estudios-ivermectina/, among others) have conducted meta-analyses with ivermectin studies, showing unpublished colourful forest plots which rapidly gained public acknowledgement and were disseminated via social media, without following any methodological or reporting guidelines. These websites do not include protocol registration with methods, search strategies, inclusion criteria, quality assessment of the included studies nor the certainty of the evidence of the pooled estimates. Prospective registration of systematic reviews with or without meta-analysis protocols is a key feature for providing transparency in the review process and ensuring protection against reporting biases, by revealing differences between the methods or outcomes reported in the published review and those planned in the registered protocol. These websites show pooled estimates suggesting significant benefits with ivermectin, which has resulted in confusion for clinicians, patients and even decision-makers. This is usually a problem when performing meta-analyses which are not based on rigorous systematic reviews, often leading to the spread of spurious or fallacious findings.

Concluding, research related to ivermectin in COVID-19 has serious methodological limitations resulting in very low certainty of the evidence, and continues to grow. The use of ivermectin, among other repurposed drugs for prophylaxis or treatment of COVID-19, should be based on trustworthy evidence, without conflicts of interest, with proven safety and efficacy in patient-consented, ethically approved, randomised clinical trials.

Source: Misleading clinical evidence and systematic reviews on ivermectin for COVID-19; https://ebm.bmj.com/content/early/2021/05/26/bmjebm-2021-111...



The only specific criticism is the following sentence; the rest is designed to create fear, uncertainty and doubt:

These websites do not include protocol registration with methods, search strategies, inclusion criteria, quality assessment of the included studies nor the certainty of the evidence of the pooled estimates.

This is false. The ivmmeta.com page says:

Typical meta analyses involve subjective selection criteria, effect extraction rules, and study bias evaluation, which can be used to bias results towards a specific outcome. In order to avoid bias we include all studies and use a pre-specified method to extract results from all studies (we also present results after exclusions).

They also compare prospective and retrospective studies and see no significant difference. You could have at least scrolled through the page before blindly copy-pasting something from Google.


One of the problems with the meta-analysis is the lack of pre-registration of most of the studies about ivermectin.

There are many studies in many places. Imagine just for a moment that some new drug has no effect. Even so, 1 in 20 of the studies will get a spurious statistically significant result. The problem is that a study with a spurious good result has a high chance of being published, while the other 19 have a high chance of not being published: the doctors lost interest, the journal doesn't like null results, or similar reasons. So if you make a review of the published studies, you will find only/mostly the ones that got a good result due to a fluke.

This is not an imaginary problem; it's a well-known one. Imagine Big Pharma has a new drug and wants it approved. They can hire 100 universities/hospitals to run clinical studies, suggest that only the ones with a lucky good result get published, and then they get about 5 good papers about the drug. Perhaps they don't even have to make the suggestion, because it's boring to write a paper about a drug that does not work, so the doctor just doesn't write it, and it's boring to read a paper about a drug that does not work, so the journal just doesn't publish it. The effect is the same, and nobody is lying ;) .

The solution for this is pre-registration. Before starting the trial, you must register it and explain what drugs you are going to use and what you are going to measure. With pre-registration, all 100 studies are public. If you later find 5 good ones and 95 that are inconclusive, negative or simply never reported, you know that the 5 good studies in the meta-analysis are probably just flukes.
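The selection effect described above is easy to simulate. A minimal sketch (all numbers hypothetical, assuming a two-arm trial analysed with an ordinary two-proportion z-test):

```python
import math
import random

random.seed(42)  # fixed seed so the sketch is reproducible

def null_trial(n=200, event_rate=0.10):
    """One trial of a drug with NO effect: both arms share the same event rate."""
    treated = sum(random.random() < event_rate for _ in range(n))
    control = sum(random.random() < event_rate for _ in range(n))
    # Two-proportion z-test (normal approximation, pooled variance)
    pooled = (treated + control) / (2 * n)
    se = math.sqrt(2 * pooled * (1 - pooled) / n)
    if se == 0:
        return False
    z = (treated / n - control / n) / se
    return abs(z) > 1.96  # "statistically significant" at the usual 5% level

published = sum(null_trial() for _ in range(100))
print(f"{published} of 100 null trials look significant")  # typically around 5
```

If only the handful of "significant" runs gets written up, a later reviewer pooling the published literature sees several positive trials and none of the nulls: exactly the distortion that pre-registration makes visible.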


I understand the P-hacking problem, but if that were the source of the positive result in the meta-analysis, it would mean that scientists are throwing away 95% of their studies on one of the most promising COVID treatments during a pandemic. In this case there's not really a reason to throw away a negative result: considering the controversy around Ivermectin, a study showing a negative result would get a huge amount of citations and attention.

In any case, thank you for trying to use reason in your argument, we need more around this subject.


> scientists are throwing away 95% of their studies

My guess is that most of these were not studies on purpose. An MD or a hospital tries a new drug that seems promising for a week or a month.

If the result is good they get very happy and try to make as much fuss as possible, which means they publish their work in some journal.

If the result is bad they lose interest in the subject, or sweep the disappointing results under the carpet, or don't want to spend time on a paper that is very difficult to publish.

So the good results of the informal tries get published, and the bad results get lost. No conspiracy. Just people doing people things.

> one of the most promising COVID treatments during a pandemic

I don't think it's promising, but I think it's important to discuss it and to try to understand why better evidence is necessary.


Then you have to explain why results about Ivermectin are more likely to be thrown out than studies about hydroxychloroquine (https://hcqmeta.com/), which has only 73.8% positive studies and shows a much smaller improvement. Why are prophylaxis studies more successful than early treatment studies, which are more successful than late treatment studies? Are they more likely to be thrown out? Why would people throw out studies in exactly the pattern that would indicate a working drug?


Which studies get published and which get dropped is not so easy to predict. Sometimes the team needs to publish to get a grant, sometimes a graduate student needs a paper to get a degree, sometimes one of the team members is too famous or a friend of the editor. It varies a lot, and papers get published with not-good-enough but "promising" results.

Also, if the main objective of the study (like reducing the number of deaths) is not met, you can pick another success criterion, like number of days in the hospital, viral load after a week, ... there are many to choose from.

In particular, in the first study about hydroxychloroquine by Raoult, he claims that it's a success, but if you read the paper you can notice that there is not a single death in either the trial group or the control group. (And the control group is not a randomized control group, just an unrelated bunch of patients in another city.)

About why early interventions with hydroxychloroquine apparently get better results:

That's a good question, but publication bias is not the only source of problems.

It's difficult to be sure, but look at the study here in the OP. My guess is that the ivermectin treatment implicitly includes the "Locatel" phone calls, and that this additional care from doctors improves the results. Other studies have similar additional care for the people under treatment.

And starting the phone calls early is better than starting them late, so that may be what you are measuring in the trials.

You may disagree with my explanation, and that's fine. The real problem is how to be sure whether I'm wrong or you are wrong (or both). So the solution is to do double-blind studies, where some people get the medicine and other people get a placebo that looks identical. The important part is that both groups get all the additional health care: both get the phone calls that are perhaps useful, both get the automatic weekly nurse checks that are common in other treatments and perhaps useful, ... So it's not necessary to guess whether any other part of the treatment is useful, because the only difference is drug vs placebo.


Isn't your last paragraph describing a randomized controlled trial? The page lists 31 for Ivermectin [0] and (for comparison) 40 for HCQ [1]. I don't find the evidence for HCQ convincing, but Ivermectin seems solid from what I can tell. It's hard to explain 28 out of 31 randomized controlled trials showing a positive result with bias, since you would expect that bias to affect both HCQ and Ivermectin. Why would Ivermectin show much better results than HCQ? What kind of bias would affect Ivermectin much more than HCQ?

[0]: https://ivmmeta.com/#fig_fpr

[1]: https://hcqmeta.com/#fcite_rct (Figure 7, scroll down)


Nice!

The HCQ data looks definitively like reporting + publication bias, but it would be nice to see an analysis by a statistics expert.

The Ivermectin data looks more interesting. I read the first study on the list, Chowdhury et al.: https://c19ivermectin.com/chowdhury.html

* If I understand correctly it's not double-blind, but as far as I can see both groups got the same treatment. Also the criteria are hard endpoints (like death, instead of how much pain you feel on a scale from 1 to 10), so my guess is that it's not a big problem here.

* It's strange that one group got Ivermectin and the other got HCQ. So why is this study not on the other site?

* I think it's not preregistered, so it's not clear how many similar experiments were made.

* The main result is that 0/60 patients with Ivermectin were hospitalized, but 2/56 patients with HCQ were. To simplify the discussion, can I round it to 0/60 patients with Ivermectin hospitalized vs 2/60 patients with HCQ?

* This looks like (1±1)/60 in both cases to me. I'm a mathematician but I'm not an expert in statistics. Anyway, I took too many laboratory classes in physics, and it definitely looks like (1±1)/60, which is very unconvincing. I don't know how they get 80.6% from that, but I can ask one of my friends.

* Anyway, let's use the null hypothesis and suppose that Ivermectin and HCQ have no effect. And for simplicity, let's assume that the hospitalization rate with no treatment is 1/60. So if we repeat this kind of experiment with groups of 60 patients, we must use a binomial distribution. From https://www.di-mgt.com.au/binomial-calculator.html with n=60 and p=1/60 ≈ 0.0167:

Probability of 0 cases: 36.5%

Probability of 1 case: 37.1%

Probability of 2 cases: 18.5%

Probability of more than 2 cases: 7.9%

So if we run many experiments, each with 60 people on Ivermectin and 60 people on HCQ, then in 36.5% × 18.5% ≈ 6.8% of them we will get, purely by a fluke, a result like the one reported. So if there are 16 non-pre-registered studies, one of them may get a result like this. Pre-registration is very important.
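The calculator figures can be reproduced exactly with a few lines of Python (a sketch; the 1/60 baseline rate is the same simplifying assumption as above):

```python
from math import comb

n = 60      # patients per arm
p = 1 / 60  # assumed baseline hospitalization rate (simplification from above)

def binom_pmf(k: int) -> float:
    """Exact probability of k hospitalizations in an arm of n patients."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

p0, p1, p2 = binom_pmf(0), binom_pmf(1), binom_pmf(2)
print(f"P(0 cases)  = {p0:.1%}")                # 36.5%
print(f"P(1 case)   = {p1:.1%}")                # 37.1%
print(f"P(2 cases)  = {p2:.1%}")                # 18.5%
print(f"P(>2 cases) = {1 - p0 - p1 - p2:.1%}")  # 7.9%

# Probability that, under the null, the Ivermectin arm happens to see
# 0 cases while the HCQ arm sees exactly 2 -- the reported pattern:
print(f"fluke probability = {p0 * p2:.1%}")     # 6.8%

# Alternative check, Fisher's exact test (one-sided): given 2 hospitalizations
# among 120 patients, the chance that both land in the 60-patient HCQ arm is
p_fisher = comb(60, 2) / comb(120, 2)
print(f"one-sided p = {p_fisher:.3f}")          # 0.248
```

The one-sided Fisher p of about 0.25 says the same thing as the (1±1)/60 intuition: a 0-vs-2 split between two 60-patient arms is entirely unremarkable.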

Looking at the other studies: in early interventions there are too many that have 0 cases in the treatment branch, while for late intervention treatments most have non-zero cases in both groups. It's suspicious, and my guess is that this is skewing the result.


I see your point that it could be P-hacking (as in https://xkcd.com/882/), but why would Ivermectin be so much more P-hacked than HCQ?


Let's look at the biggest studies:

HCQ:

RECOVERY (RCT) -9% 1.09 [0.97-1.23] death 421/1,561 790/3,155

Mitjà (RCT) 52% 0.48 [0.15-1.57] death 4/1,196 9/1,301

SOLIDARITY (RCT) -19% 1.19 [0.89-1.59] death 104/947 84/906

---

Ivermectin:

Seet (CLUS. RCT) 50% 0.50 [0.33-0.76] severe case 32/617 64/619 12mg

Elgazzar (RCT) 92% 0.08 [0.02-0.35] death 2/200 24/200 112mg

Mahmud (DB RCT) 86% 0.14 [0.01-2.75] death 0/183 3/183 12mg

---

My guess is: HCQ was very promising in early 2020 and there has been enough time to run big RCTs. Ivermectin became more promising this year, and the big RCTs either haven't finished yet or their results are not published yet.

(In a big trial with a few thousand patients you need a lot of time, coordination and paperwork, so the results are less likely to go unpublished.)


If you're right, there should be a correlation between study size and results (I'll check for this tomorrow if I remember). It would still have to be explained why small studies show a much worse result for HCQ than for Ivermectin.
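As a sketch of what that check could look like, here is a Spearman rank correlation between trial size and reported risk ratio, using only the six trials quoted upthread (both drugs pooled, so this is purely illustrative and far too few points to conclude anything):

```python
def rank(xs):
    """1-based ranks (no ties in this data)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for position, i in enumerate(order, start=1):
        r[i] = position
    return r

# (total patients, risk ratio) taken from the six trials quoted above
trials = {
    "RECOVERY":   (1561 + 3155, 1.09),
    "Mitja":      (1196 + 1301, 0.48),
    "SOLIDARITY": (947 + 906,   1.19),
    "Seet":       (617 + 619,   0.50),
    "Elgazzar":   (200 + 200,   0.08),
    "Mahmud":     (183 + 183,   0.14),
}
sizes = [s for s, _ in trials.values()]
rrs = [r for _, r in trials.values()]

# Spearman rho: positive means bigger trials report risk ratios closer to
# (or above) 1, i.e. less apparent benefit in larger studies.
n = len(sizes)
d2 = sum((a - b) ** 2 for a, b in zip(rank(sizes), rank(rrs)))
rho = 1 - 6 * d2 / (n * (n**2 - 1))
print(f"Spearman rho = {rho:.2f}")  # 0.66 on these six trials
```

A positive rho is the classic signature of small-study and publication bias; a real analysis would do this per drug across all trials, e.g. with a funnel plot.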




