The medicine you took this morning has come a long way from the lab to your pill pack. First, there is extensive laboratory research. Next, animal testing. But before a drug can be approved for use, it must be tested on humans – in an expensive and complex process known as a clinical trial.
In its simplest form, a clinical trial looks like this: researchers enroll patients with the disease the experimental drug is targeting. The volunteers are randomly divided into two groups. One group receives the experimental drug; the other, called a control group, receives a placebo (a treatment that looks the same as the drug being tested, but has no effect). If the patients who receive the active drug show more improvement than those who receive the placebo, this is evidence that the drug is effective.
One of the hardest parts of designing a trial is finding enough volunteers who meet the exact criteria for the study. Physicians may not know which trials might be suitable for their patients, and patients who wish to enroll may not have the necessary characteristics for a given trial. But artificial intelligence could make this job much easier.
Meet your twin
Digital twins are computer models that simulate real-world objects or systems. They behave almost the same, statistically, as their physical counterparts. NASA used a digital twin of the Apollo 13 spacecraft after an oxygen tank exploded, leaving engineers on Earth scrambling to devise repairs from 200,000 miles away.
Given enough data, scientists can create digital twins of people using machine learning, a type of artificial intelligence in which programs learn from large amounts of data rather than being explicitly programmed for the task. Digital twins of patients in clinical trials are created by training machine learning models on patient data from previous clinical trials and from individual patient records. The model predicts how a patient’s health would progress during the trial if given a placebo, essentially creating a simulated control counterpart for that particular patient.
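As a toy illustration of that training idea (not Unlearn.AI’s actual method – the data and the simple averaging rule below are invented), a stand-in “digital twin” could predict an untreated patient’s final score from the average change seen in past control-arm patients:

```python
from statistics import mean

# Invented historical control-arm records: (baseline score, final score)
history = [(50.0, 48.5), (42.0, 39.0), (55.0, 54.0), (38.0, 35.5)]

# Average change observed in untreated patients across past trials.
avg_change = mean(final - baseline for baseline, final in history)

def digital_twin_prediction(baseline_score):
    """Toy 'digital twin': predict an untreated patient's final score
    as their baseline plus the average historical change. Real systems
    train machine learning models on far richer patient data."""
    return baseline_score + avg_change

print(digital_twin_prediction(47.0))  # 45.0 with these invented numbers
```

The point of the sketch is only the shape of the idea: past control-arm data goes in, a per-patient prediction of the untreated outcome comes out.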
So here’s how it would work: one volunteer, let’s call her Sally, is assigned to the group that receives the active drug. Sally’s digital twin (the computer model) is part of the control group. It predicts what would happen if Sally did not receive the treatment. The difference between Sally’s actual response to the drug and the model’s prediction of her response had she taken the placebo instead would be an estimate of the treatment’s effectiveness for Sally.
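The arithmetic of that comparison is straightforward. A minimal sketch, where the prediction function is a made-up stand-in for a trained model and all the numbers are invented:

```python
def predict_placebo_outcome(patient):
    """Stand-in for Sally's digital twin: a trained model would go here.
    This invented rule just assumes an untreated patient loses 2 points
    from their baseline score."""
    return patient["baseline_score"] - 2.0

# Invented data for Sally, who received the active drug.
sally = {"baseline_score": 50.0}
sally_observed_outcome = 55.0  # her measured response on the drug

# The twin predicts her outcome had she taken the placebo instead.
predicted_placebo = predict_placebo_outcome(sally)

# Estimated individual treatment effect: observed minus predicted.
treatment_effect = sally_observed_outcome - predicted_placebo
print(treatment_effect)  # 7.0 with these invented numbers
```

In other words, the twin supplies the counterfactual half of the comparison that a human control subject would normally provide.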
Digital twins are also created for patients in the control group. By comparing predictions of what would happen to digital twins receiving the placebo with the humans who actually received the placebo, researchers can spot any problems in the model and make it more accurate.
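That checking step amounts to comparing the twins’ predictions with the outcomes actually observed in the control group. A minimal sketch with invented numbers – a systematic gap between the two columns would signal a model problem:

```python
# Invented pairs: (digital-twin prediction, actual placebo outcome)
control_group = [
    (48.0, 47.5),
    (52.0, 53.0),
    (45.0, 44.0),
    (50.0, 51.5),
]

# Mean prediction error: a value far from zero would indicate the
# model systematically over- or under-predicts untreated outcomes.
errors = [predicted - actual for predicted, actual in control_group]
mean_error = sum(errors) / len(errors)
print(round(mean_error, 3))  # -0.25 for these invented pairs
```

Real validation would use richer error metrics than a single mean, but the principle is the same: the placebo arm doubles as a yardstick for the model.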
Replacing or augmenting control groups with digital twins could help patient volunteers as well as researchers. Most people who join a trial do so with the hope of getting a new drug that might help them when previously approved drugs have failed. But there is a 50/50 chance that they will be placed in the control group and not receive the experimental treatment. Replacing control groups with digital twins could mean more people have access to experimental drugs.
The technology may be promising, but it’s not yet widely used – perhaps for good reason. Daniel Neill, PhD, an expert in machine learning and its healthcare applications at New York University, points out that machine learning models depend on the availability of large amounts of data, and that high-quality data on individuals can be hard to obtain. Information about things like diet and exercise is often self-reported, and people aren’t always honest. They tend to overestimate the amount of exercise they get and underestimate the amount of junk food they eat, he says.
Considering rare adverse events could also be a problem, he adds. “Most likely, these are things you didn’t model in your control group.” For example, someone might have an unexpected negative reaction to a drug.
But Neill’s biggest concern is that the predictive model reflects what he calls “the status quo.” Suppose a major unexpected event – something like the COVID-19 pandemic, for example – changes everyone’s behaviors and people get sick. “That’s something these control models wouldn’t take into account,” he says. These unforeseen events, not taken into account in the control group, could distort the result of the trial.
Eric Topol, founder and director of the Scripps Research Translational Institute and an expert on the use of digital technologies in healthcare, thinks the idea is a great one, but not yet ready for prime time. “I don’t think clinical trials are going to change in the short term, because it requires multiple layers of data beyond health records, like genome sequence, gut microbiome, environmental data, etc.” He predicts it will take years to be able to do large-scale trials using AI, especially for more than one disease. (Topol is also the editor of Medscape, WebMD’s sister website.)
Gathering enough quality data is a challenge, says Charles Fisher, PhD, founder and CEO of Unlearn.AI, a start-up pioneering digital twins for clinical trials. But, he says, solving this kind of problem is part of the company’s long-term goals.
According to Fisher, two of the most frequently cited concerns about machine learning models – privacy and bias – are already addressed. “Privacy is easy. We only work with already anonymized data.”
As for bias, the issue is unresolved, but it’s irrelevant – at least to the outcome of the trial, according to Fisher. A well-documented problem with machine learning tools is that they can be trained on biased datasets – for example, those that underrepresent a particular group. But, says Fisher, because the trials are randomized, the results are insensitive to bias in the data. The trial measures how the test drug affects the people in the trial by comparing them with controls, and the model is adjusted to more closely match the real-world controls. Thus, according to Fisher, even if the choice of subjects for the trial is biased and the original data is biased, “we are able to design trials to be insensitive to this bias.”
Neill does not find this convincing. You can eliminate biases in a narrow randomized trial by adjusting your model to correctly estimate the treatment effect for the study population, but you will simply reintroduce those biases when you try to generalize beyond the study. Unlearn.AI “does not compare treated individuals to controls,” says Neill. “It compares treated individuals with model-based estimates of what each individual’s outcome would have been had they been in the control group. Any error in these models, or any event they fail to anticipate, can lead to systematic biases – that is, an overestimation or underestimation of the treatment effect.”
But Unlearn.AI is pressing ahead. It is already working with pharmaceutical companies to design trials for neurological diseases such as Alzheimer’s disease, Parkinson’s disease, and multiple sclerosis. There is more data on these diseases than on many others, so they were a good place to start. Fisher says the approach could eventually be applied to all diseases, dramatically shortening the time it takes to bring new drugs to market.
If this technology proves useful, these invisible siblings could benefit patients and researchers alike.