Hundreds of AI tools have been built to catch covid. None of them helped.

It also obscures the origin of certain datasets. This can mean that researchers miss important features that skew the training of their models. Many unwittingly used a dataset that contained chest scans of children who did not have covid as their examples of what non-covid cases looked like. But as a result, the AIs learned to identify children, not covid.

Driggs’ group trained its model on a dataset that contained a mix of scans taken while patients were lying down and standing up. Because patients scanned while lying down were more likely to be seriously ill, the AI wrongly learned to predict serious covid risk from a person’s position.
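This kind of shortcut is easy to reproduce. Below is a minimal sketch, using a small synthetic dataset with made-up probabilities (none of this is the researchers’ actual code or data): a classifier that is shown nothing but the patient’s scanning position can still appear to “detect” covid, which is exactly the cheat described above.

```python
# Minimal sketch with invented, synthetic data (not the researchers' code):
# on a confounded dataset, a classifier given only the patient's scanning
# position can appear to "detect" covid.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Simulate the confound: seriously ill (covid-positive) patients are far more
# likely to be scanned lying down than standing up.
covid = rng.integers(0, 2, size=n)                    # ground-truth label
p_lying = np.where(covid == 1, 0.9, 0.2)              # position depends on the label
lying_down = rng.binomial(1, p_lying).reshape(-1, 1)  # the only "feature" we use

X_train, X_test, y_train, y_test = train_test_split(
    lying_down, covid, test_size=0.3, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print(f"Accuracy from position alone: {clf.score(X_test, y_test):.2f}")
# A high score here means the dataset lets a model cheat: it can look at how
# the scan was taken instead of what the lungs actually show.
```

Running the same check on real metadata (position, hospital, label font, scanner model) is a quick way for a team to see whether a dataset is handing its model a shortcut.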

In yet other cases, some AIs were found to be picking up on the text font that certain hospitals used to label the scans. As a result, fonts from hospitals with more serious caseloads became predictors of covid risk.

Mistakes like these seem obvious in hindsight. They can also be corrected by adjusting the models, if researchers are aware of them. It is possible to acknowledge the shortcomings and publish a less accurate, but less misleading, model. But many tools were developed either by AI researchers who lacked the medical expertise to spot flaws in the data, or by medical researchers who lacked the mathematical skills to compensate for those flaws.

A more subtle issue that Driggs highlights is incorporation bias, or bias introduced at the point at which a dataset is labeled. For example, many medical scans were labeled according to whether the radiologists who created them said they showed covid. But that embeds, or incorporates, any biases of that particular doctor into a dataset’s ground truth. It would be much better to label a medical scan with the result of a PCR test rather than one doctor’s opinion, Driggs says. But there isn’t always time for statistical niceties in busy hospitals.
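To see why the labels matter so much, here is a rough sketch with invented numbers, purely for illustration: if a radiologist’s reads rather than PCR results are used as labels, any systematic bias in those reads becomes part of the dataset’s ground truth, and no model trained on it can do better than the labels allow.

```python
# Minimal sketch with invented probabilities, purely to illustrate
# incorporation bias: the radiologist's reads, not PCR results, become the
# dataset's "ground truth", so any bias in those reads is baked into the labels.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

pcr_positive = rng.binomial(1, 0.3, size=n)   # true disease status (PCR)
lying_down = rng.binomial(1, 0.5, size=n)     # how the scan was taken

# Assume this radiologist correctly reads most true positives, but also
# over-calls covid when the scan was taken lying down.
p_read_positive = 0.85 * pcr_positive + 0.25 * lying_down * (1 - pcr_positive)
radiologist_label = rng.binomial(1, p_read_positive)

print("Label/PCR agreement:", np.mean(radiologist_label == pcr_positive))
# The gap between this number and 1.0 is error that lives in the labels
# themselves; a model trained on these labels inherits it, no matter how
# carefully the model is tuned.
```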

This has not stopped some of these tools from being rushed into clinical practice. Wynants says it is not clear which ones are being used, or how. Hospitals will sometimes say they are using a tool only for research purposes, which makes it hard to assess how much doctors are relying on it. “There’s a lot of secrecy,” she says.

Wynants asked one company that markets deep-learning algorithms to share information about its approach, but it did not respond. She later found several published models from researchers affiliated with the company, all of which carried a high risk of bias. “We don’t know what the company is actually implementing,” she says.

According to Wynants, some hospitals are even signing non-disclosure agreements with medical AI vendors. When she asked doctors what algorithms or software they were using, they sometimes told her they were not allowed to say.

How to fix it

What’s the fix? Better data might help, but in times of crisis that’s a big ask. Making the most of the datasets we already have is more important. The simplest move would be for AI teams to collaborate more with clinicians, Driggs says. Researchers also need to share their models and describe how they were trained so that others can test and build on them. “These are two things we can do today,” he says. “And they would solve maybe 50% of the problems we identified.”

Bilal Mateen, a doctor who leads clinical technology research at the Wellcome Trust, a global health research charity based in London, says data would be easier to obtain if the formats were standardized.
