“First-in-human dosing,” carried out in what are frequently referred to as “FIH studies,” refers to the initial administration of an investigational drug or treatment to human subjects. This typically occurs during Phase I clinical trials, which are designed to evaluate the safety, tolerability, and pharmacokinetics of the drug. The primary goal of first-in-human dosing is to determine a safe dosage range and identify any potential side effects. This phase is crucial in the drug development process, as it marks the transition from preclinical testing to human trials. Dose selection is considered the most difficult part of Phase I clinical trials because it requires converting the animal dosing regimen to a human one.
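To make the animal-to-human conversion concrete, here is a minimal sketch of the standard body-surface-area (BSA) scaling approach described in FDA guidance. The Km values are the commonly cited BSA conversion factors; the example dose is hypothetical.

```python
# Body-surface-area (BSA) scaling: convert an animal dose (mg/kg) to a
# human-equivalent dose (HED) using standard Km conversion factors
# (body weight in kg divided by BSA in m^2, per FDA guidance).
KM_FACTORS = {"mouse": 3, "rat": 6, "rabbit": 12, "dog": 20, "human": 37}

def human_equivalent_dose(animal_dose_mg_per_kg: float, species: str) -> float:
    """Convert an animal dose (mg/kg) to a human-equivalent dose (mg/kg)."""
    return animal_dose_mg_per_kg * KM_FACTORS[species] / KM_FACTORS["human"]

# Example: a hypothetical 50 mg/kg dose in rats scales to ~8.1 mg/kg in humans.
print(round(human_equivalent_dose(50, "rat"), 1))  # 8.1
```

Note that this scaling captures only average cross-species differences in body surface area; it says nothing about the individual variability discussed below.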
Traditional FIH dosing methods often rely on models derived from animal studies, which frequently fail to accurately replicate human physiology. Animal models, while useful in some contexts, oversimplify the complexities of human biology, leading to significant discrepancies when results are extrapolated to human subjects. This oversimplification can produce misleading predictions about drug efficacy and safety, with serious consequences in clinical trials.
One major difficulty in FIH dosing is accounting for variability in human response. The complexity of possible individual responses is challenging to capture with traditional FIH dosing methods, and failing to properly consider these variations can heighten the risk of adverse effects in clinical trial participants. It can also add to the already substantial cost and time investment of clinical trials. This underscores the need for more sophisticated approaches to FIH dosing that can account for the myriad factors influencing individual human response.
At the core of this complexity around variability in human response is our genetic makeup. Genes can influence how our bodies react to medications, with certain genetic variations predisposing individuals to different levels of drug sensitivity or resistance. This genetic heterogeneity means that a drug that works wonders for one person might have minimal efficacy, or even cause adverse effects, in another.
The human body is equipped with enzymes that break down and eliminate drugs, but the efficiency of these processes can differ greatly from person to person. This variability can lead to substantially different levels of drug exposure: some individuals may experience inadequate therapeutic effects due to rapid drug clearance, while others may suffer toxicity due to slow metabolism. These differences are difficult to predict a priori, which makes it challenging to determine the optimal dosage for each patient, especially during the critical phase of first-in-human dosing.
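The link between clearance and exposure can be illustrated with a simple one-compartment pharmacokinetic sketch. The dose, volume, and clearance values below are hypothetical, chosen only to show how the same dose yields very different exposures in fast and slow metabolizers (for an IV bolus, total exposure AUC = dose / clearance).

```python
import math

def concentration(t_h, dose_mg, cl_l_per_h, v_l):
    """Plasma concentration (mg/L) at time t for a one-compartment IV bolus."""
    k_el = cl_l_per_h / v_l          # elimination rate constant (1/h)
    return (dose_mg / v_l) * math.exp(-k_el * t_h)

def auc(dose_mg, cl_l_per_h):
    """Total drug exposure (mg*h/L) for an IV bolus."""
    return dose_mg / cl_l_per_h

dose = 100                           # mg (hypothetical)
for label, cl in [("rapid metabolizer", 10.0), ("slow metabolizer", 2.5)]:
    # Same dose, but a 4x difference in clearance means a 4x gap in exposure.
    print(label, round(auc(dose, cl), 1))
```

The 4x exposure gap here is modest compared with what enzyme polymorphisms can produce in practice, which is exactly why fixed starting doses can under- or over-expose individual participants.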
Beyond genetics, environmental factors also play crucial roles in shaping drug responses. Environmental influences such as diet, lifestyle, and exposure to pollutants can alter how the body processes medications.
Further compounding these issues is the presence of comorbidities or underlying health conditions, which can significantly alter how a person responds to a drug. This interplay between drugs and comorbidities adds another dimension to the variability in human response, making it even more challenging to predict how a new drug will behave in a diverse patient population.
AI addresses the issues inherent in traditional methods of first-in-human dosing by leveraging advanced analytics and machine learning techniques to create more accurate and reliable forecasting tools. Traditional methods often rely on animal data and approaches such as the NOAEL (No-Observed-Adverse-Effect Level) and MABEL (Minimum Anticipated Biological Effect Level) methods. These can be imprecise and poorly predictive of human responses, and they require numerous clinical experiments with very low doses, which consumes time and resources and poses some risk to volunteers. AI, on the other hand, can integrate and analyze vast amounts of diverse data to build sophisticated predictive models. By identifying complex patterns and relationships within these datasets, AI algorithms can develop models that better predict human pharmacokinetics, pharmacodynamics, and toxicity.
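As a point of comparison for the NOAEL method mentioned above, here is a sketch of the standard NOAEL-based calculation from FDA guidance: the animal NOAEL is converted to a human-equivalent dose via body-surface-area scaling, then divided by a safety factor (10 by default). The example NOAEL is hypothetical.

```python
# NOAEL-based maximum recommended starting dose (MRSD), per FDA guidance:
# 1) scale the animal NOAEL to a human-equivalent dose (HED) using
#    standard BSA conversion factors (Km values), then
# 2) apply a safety factor (default 10).
KM = {"mouse": 3, "rat": 6, "rabbit": 12, "dog": 20, "human": 37}

def mrsd(noael_mg_per_kg: float, species: str, safety_factor: float = 10.0) -> float:
    """Maximum recommended starting dose (mg/kg) from an animal NOAEL."""
    hed = noael_mg_per_kg * KM[species] / KM["human"]
    return hed / safety_factor

# Example: a hypothetical rat NOAEL of 100 mg/kg gives an MRSD of ~1.6 mg/kg.
print(round(mrsd(100, "rat"), 1))  # 1.6
```

The blanket safety factor is precisely what makes this approach conservative but imprecise: it guards against average cross-species error while ignoring the individual variability that AI-driven models attempt to capture.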
It’s important to keep in mind, however, that while AI can produce many benefits in the first-in-human dosing process, pure AI alone still has its limitations. One limitation is that pure AI would require a vast amount of individualized data to learn all of the patterns needed to predict variations between patients, such as the genetic and clearance differences mentioned above, and data covering the entire possible range of individuals is simply not available.
A second limitation is that pure AI is not infused with the biological and chemical knowledge that a hybrid AI approach uses to enrich and guardrail its models. Pure AI will not know such guardrails, or, at best, will require a great deal of data to learn them, leading to potentially low reliability and unexplainable results.
On the flip side, pure knowledge alone works only when all the pieces are known, so it cannot tell us how a completely new drug will behave; AI can address that by learning patterns. Hybrid AI therefore becomes important when we want to make first-in-human dose predictions for a new drug (or one with limited data) while accounting for individual variations and staying within biological and chemical guardrails.
BIOiSIM’s hybrid AI approach uniquely addresses the major challenge of variability in human response by integrating physiological and genetic variations into its calculations to obtain individualized dose-exposure results in different organs. Furthermore, the platform incorporates real-world evidence and then enriches that data with a “virtual patients” approach, in which the models generate synthetic patient data to enable more detailed analysis, with the goal of identifying whether certain patients would respond differently to each drug.
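To give a flavor of the “virtual patients” idea in its simplest form, the sketch below (illustrative assumptions only, not BIOiSIM’s actual models) samples drug clearance from a lognormal distribution to generate a synthetic population, then examines the spread of exposures a single fixed dose would produce across that population.

```python
# Toy "virtual patients" simulation: draw per-patient clearance values
# from a lognormal distribution (a common assumption for PK parameters)
# and compute the resulting exposure (AUC = dose / clearance) for each.
import math
import random
import statistics

random.seed(42)                       # reproducible synthetic population
dose_mg = 100                         # fixed dose given to every virtual patient
median_cl, cv = 5.0, 0.4              # L/h median clearance, ~40% variability

def sample_clearance() -> float:
    """Sample one patient's clearance from a lognormal distribution."""
    sigma = math.sqrt(math.log(1 + cv ** 2))
    return median_cl * math.exp(random.gauss(0, sigma))

aucs = [dose_mg / sample_clearance() for _ in range(10_000)]
print(round(statistics.median(aucs), 1))         # typical patient's exposure
print(round(min(aucs), 1), round(max(aucs), 1))  # spread across the population
```

Even this toy version makes the point: a single dose produces a wide distribution of exposures, and a population-level simulation reveals the tails (under- and over-exposed patients) that a single "average patient" calculation hides.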
BIOiSIM is also unique in its ability to quantify the likelihood of clinical success by means of its Translational IndexⓇ, which calculates a “score” from SMILES, dose, and target information. Think of this score as similar to a credit score, in that it indicates the likelihood of a drug candidate’s success. The Translational Index produces a score for each dose, with the optimal dose corresponding to the maximum TI score. This is more powerful than standard approaches such as the efficacy-toxicity (“eff-tox”) models used in pharmacometrics, as those models require some information on the efficacy and toxicity of the drug in question, whereas this approach does not. Overall, the Translational Index is the first scoring system of its kind able to quantify the variability in toxicity and efficacy across a patient population and, by extension, the likelihood of a drug candidate’s success in different patient groups.
These new predictive capabilities, and the consequent improved ability to account for variability in response, naturally result in reduced risk to clinical trial participants, as well as reductions in the cost and time investment of traditional clinical trials. After working with VeriSIM Life, Dr. Annick Menetrey stated in this client webinar, “We were quite early on (at the beginning of the project), with not a lot of data. And then (with BIOiSIM) we had the predictions quickly, and this was really a support for our CMC activities and clinical activities… Your team supported the selection of the starting dose in the initial clinical trial. Having that dosage prediction super early then supported the ability to start discussing the clinical design, which in turn supported the selection of the dosing regimen.” Debiopharm estimated that VeriSIM Life was able to produce these predictions in less than three months.
To learn more, check out our other content on dose optimization and using AI to determine dose selection.