The medication you took this morning has come a long way from the lab to your pill box. First, there is extensive laboratory research. Then come animal tests. But before a drug is approved for use, it must be tested in humans, in an expensive and complex process known as a clinical trial.
In its simplest form, a clinical trial works something like this: Researchers recruit patients with the disease the experimental drug is intended to treat. Volunteers are randomly divided into two groups. One group gets the trial drug; the other, called the control group, gets a placebo (a treatment that looks identical to the drug being tested but has no effect). If patients who got the active drug show greater improvement than those who got the placebo, that is evidence the drug is effective.
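The comparison at the heart of this design can be sketched in a few lines of code. The numbers below are invented purely for illustration; they don't come from any real trial.

```python
import random

random.seed(0)  # make the toy data reproducible

# Simulated symptom improvements (invented numbers, not real trial data)
drug_group = [random.gauss(12, 3) for _ in range(50)]    # got the active drug
placebo_group = [random.gauss(5, 3) for _ in range(50)]  # got the placebo

def mean(values):
    """Average improvement in a group."""
    return sum(values) / len(values)

# Evidence of effectiveness: the drug group improved more on average
print(mean(drug_group) - mean(placebo_group) > 0)
```

Because patients are assigned to the groups at random, a clear difference between the two averages is attributable to the drug rather than to who happened to enroll.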
One of the most challenging parts of trial design is finding enough volunteers who meet the exact criteria for the study. Clinicians may not know which trials may suit their patients, and patients who wish to enroll may not have the characteristics needed for a particular trial. But artificial intelligence could make this task a lot easier.
Meet your twin
Digital twins are computer models that simulate real-world objects or systems. They behave roughly the same way, statistically, as their physical counterparts. NASA used a digital twin of the Apollo 13 spacecraft to help plan repairs after an oxygen tank exploded, leaving engineers on Earth scrambling to troubleshoot a spacecraft 200,000 miles away.
When enough data is available, scientists can create digital twins of people, using machine learning, a type of artificial intelligence in which programs learn from large amounts of data rather than being programmed specifically for the task at hand. Digital twins for patients in clinical trials are created by training machine learning models on patient data from previous clinical trials and from individual patient records. The model predicts how a patient’s health would progress over the course of the trial if they were given a placebo, essentially creating a simulated control group for a particular patient.
Here’s how it would work: A person, let’s call her Sally, is assigned to the group that gets the active drug. Sally’s digital twin (computer model) is in the control group. It predicts what will happen if Sally does not get treatment. The difference between Sally’s response to the drug and the model’s prediction of Sally’s response if she took the placebo instead would be an estimate of how effective Sally’s treatment was.
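The Sally example boils down to simple arithmetic. Here is a minimal sketch of that per-patient estimate; the function name and numbers are hypothetical, not Unlearn.AI's actual method or scale.

```python
def treatment_effect(observed_outcome: float, predicted_placebo_outcome: float) -> float:
    """Estimated effect = what actually happened on the drug, minus what
    the digital twin predicts would have happened on the placebo."""
    return observed_outcome - predicted_placebo_outcome

# Suppose Sally scores 72 on a symptom scale after treatment, and her
# digital twin predicts she would have scored 60 on a placebo.
effect = treatment_effect(72.0, 60.0)
print(effect)  # 12.0
```

The catch, as critics note later in the article, is that the second number is a model prediction, so any error in the twin flows directly into the estimated effect.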
Digital twins are also created for patients in the control group. By comparing predictions of what will happen to digital twins getting a placebo with what actually happens to the humans getting a placebo, researchers can spot problems with the model and make it more accurate.
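That calibration step amounts to measuring how far the twins' predictions drift from real placebo outcomes. One simple way to do this, shown here as an illustrative sketch rather than Unlearn.AI's actual procedure, is the mean absolute error:

```python
def mean_absolute_error(predicted: list[float], actual: list[float]) -> float:
    """Average gap between twin predictions and observed placebo outcomes.
    Smaller is better; a large value flags a problem with the model."""
    assert len(predicted) == len(actual)
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

# Hypothetical twin predictions vs. observed outcomes for three placebo patients
predictions = [60.0, 55.0, 70.0]
observed = [62.0, 54.0, 67.0]
print(mean_absolute_error(predictions, observed))  # 2.0
```

If the error stays small across the real control group, that builds confidence in the predictions made for patients like Sally, who never actually took the placebo.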
Replacing or augmenting control groups with digital twins can help volunteer patients as well as researchers. Most people who join a trial do so in the hope of obtaining a new drug that may help them when already approved medications have failed. But there is a 50/50 chance that they will be placed in the control group and not get the experimental treatment. Replacing control groups with digital twins could mean more people have access to experimental drugs.
The technology may be promising, but it's not yet widespread, perhaps for good reason. Daniel Neal, PhD, is an expert in machine learning, including its applications in healthcare, at New York University. He points out that machine learning models rely on large amounts of data, and it can be difficult to get high-quality data on individuals. Information about things like diet and exercise is often self-reported, and people are not always honest. They tend to overestimate how much exercise they get and underestimate how much junk food they eat, he says.
He adds that rare negative events can also be a problem. “Most likely, these are things that you didn’t design into your control group.” For example, someone could have an unexpected adverse reaction to a drug.
But Neal’s biggest concern is that the predictive model reflects what he calls “business as usual.” Say a major unexpected event, something like the COVID-19 pandemic, changes everyone’s behavior patterns and makes people sick. “That’s something these control models won’t take into account,” he says. Such unexpected events, not accounted for in the simulated control group, could skew the outcome of the trial.
Eric Topol, founder and director of the Scripps Research Translational Institute and an expert on the use of digital technologies in healthcare, thinks the idea is brilliant.
But it’s not ready for prime time yet. “I don’t think clinical trials are going to change in the near term, because this requires multiple layers of data beyond health records, such as genome sequencing, gut microbiome, and environmental data.” He predicts it will take years before large-scale AI-based trials are possible, particularly for more than one disease. (Topol is also the editor-in-chief of Medscape, WebMD’s sister site.)
Collecting enough high-quality data is a challenge, says Charles Fisher, PhD, founder and CEO of Unlearn.AI, a leading startup in digital twins for clinical trials. But he says tackling this kind of problem is part of the company’s long-term goals.
Fisher says that two of the most commonly cited concerns about machine learning models — privacy and bias — have already been taken into account. “Privacy is easy. We only work with data that is already anonymised.”
When it comes to bias, the problem isn’t solved, but it’s irrelevant, at least to the outcome of the trial, according to Fisher. A well-documented problem with machine learning tools is that they can be trained on biased data sets, for example ones in which a particular group is underrepresented. But, Fisher says, because the trials are randomized, the results are not sensitive to this bias. The trial measures how the drug being tested affects the subjects by comparing them with controls, and it adjusts the model to more closely match the real controls. So even if the selection of trial participants is biased, or the original data set is biased, “we are able to design trials that are insensitive to that bias.”
Neal doesn’t find this convincing. You can remove bias in a randomized trial in the narrow sense, by adjusting your model to correctly estimate the treatment effect in the study population, but you will reintroduce those biases when you try to generalize beyond the study. Unlearn.AI “doesn’t compare treated individuals to controls,” Neal says. “It compares treated individuals to model-based estimates of what the individual’s outcome would have been if they were in the control group. Any errors in those models, or any events they don’t anticipate, can produce systematic biases, that is, increases or decreases in the treatment effect estimates.”
Unlearn.AI is moving forward nonetheless. It is already working with drug companies to design trials for neurodegenerative diseases such as Alzheimer’s disease, Parkinson’s disease, and multiple sclerosis. There is more data on these diseases than on many others, so they were a good place to start. The approach could eventually be applied to every disease, Fisher says, greatly reducing the time it takes to bring new drugs to market.
If this technique proves useful, these invisible siblings could benefit patients and researchers alike.