- Presenter: Emma McKay, Inka Health
- Topic: Borrowing Strength: Applications of Bayesian Methods to Clinical Trials in Rare Diseases
- Abstract: Evaluating the efficacy of drugs developed to treat rare/orphan diseases presents a particular challenge, as it may be impractical or infeasible to design a well-powered conventional randomized controlled trial (RCT). This may be due to the difficulty of recruiting patients and/or the ethical considerations of assigning patients to a placebo or standard-of-care control when there is a serious unmet need. To address these challenges, methods such as the construction of external control arms from real-world data or historical trials, as well as hybrid-control designs, have gained traction in recent years. This talk will provide a brief introduction to Bayesian borrowing methods and highlight some recent applications to rare disease settings. Methods discussed will include power priors, meta-analytic predictive (MAP) priors, and Bayesian hierarchical modelling (BHM).
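For readers unfamiliar with Bayesian borrowing, the following is a minimal sketch of a power prior in the conjugate beta-binomial case, where the historical likelihood is discounted by a weight a0. All counts, the weight a0, and the Beta(1, 1) initial prior are illustrative assumptions, not material from the talk.

```python
from scipy import stats

# Power-prior sketch (illustrative only; all numbers are made up).
# Historical (external) control and current control data as binomial counts.
y0, n0 = 18, 60      # historical control: responders / patients (hypothetical)
y,  n  = 10, 25      # current control arm (hypothetical)
a0 = 0.5             # discounting weight on the historical likelihood, 0 <= a0 <= 1
alpha0, beta0 = 1.0, 1.0   # vague Beta(1, 1) initial prior

# The power prior raises the historical likelihood to the power a0.
# With a conjugate Beta prior the posterior stays Beta, so it is closed form:
post_alpha = alpha0 + a0 * y0 + y
post_beta = beta0 + a0 * (n0 - y0) + (n - y)
posterior = stats.beta(post_alpha, post_beta)

print(f"Posterior mean response rate: {posterior.mean():.3f}")
print(f"95% credible interval: {posterior.ppf([0.025, 0.975]).round(3)}")
```

Setting a0 = 0 discards the historical data entirely, while a0 = 1 pools it fully with the current trial; intermediate values borrow partial strength, which is the trade-off the MAP and BHM approaches manage adaptively.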
- Presenter: Ahmad Sofi-Mahmudi, McMaster University
- Topic: COVID-19 related research data availability and quality according to the FAIR principles: A meta-research study
- Abstract: Background: The FAIR principles state that scientific research data should be Findable, Accessible, Interoperable, and Reusable. The COVID-19 pandemic has led to massive research activity and an unprecedented number of topical publications in a short time. Objective: Our objective was to investigate the availability of open data in COVID-19-related research and to assess compliance with FAIRness. Methods: We conducted a comprehensive search and retrieved all open-access articles related to COVID-19 from journals indexed in PubMed, published until June 2023. Using a validated automated tool, we identified articles that included a link to their data hosted in a public repository. We then screened the links and included only those repositories that contained data specific to the corresponding paper. Subsequently, we automatically assessed the adherence of the repositories to the FAIR principles. We report descriptive analyses for each article type, journal category, and repository. Results: 5,700 URLs, all pointing to data shared in general-purpose repositories, were included in the final analysis. The mean (standard deviation, SD) level of compliance with the FAIR metrics was 9.4 (4.88). The percentages of moderate or advanced compliance were as follows: Findability 100.0%, Accessibility 21.5%, Interoperability 46.7%, and Reusability 61.3%. The overall and component-wise monthly trends were consistent over the follow-up. Reviews (9.80, SD=5.06, n=160), articles in dental journals (13.67, SD=3.51, n=3), and data deposited in Harvard Dataverse (15.79, SD=3.65, n=244) had the highest mean FAIRness scores, whereas letters (7.83, SD=4.30, n=55), articles in neuroscience journals (8.16, SD=3.73, n=63), and data deposited in GitHub (4.50, SD=0.13, n=2,152) showed the lowest scores. Regression models showed that the most influential factor on FAIRness scores was the repository (R2=0.809). Conclusion: This study underscores the potential for improvement across all facets of the FAIR principles, with a specific emphasis on enhancing Interoperability and Reusability of the data shared during the COVID-19 pandemic.
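The analysis pattern described above (per-group descriptive summaries of a FAIRness score plus a regression quantifying how much variance each factor explains) can be illustrated with a small sketch. The column names and the toy data below are assumptions for illustration, not the authors' actual variables or results.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Toy dataset standing in for article-level FAIRness scores (hypothetical).
df = pd.DataFrame({
    "fair_score": [15.2, 16.1, 4.4, 4.6, 9.8, 8.1, 13.9, 4.5],
    "repository": ["Dataverse", "Dataverse", "GitHub", "GitHub",
                   "Zenodo", "Zenodo", "Dataverse", "GitHub"],
    "article_type": ["research", "review", "letter", "research",
                     "review", "research", "research", "letter"],
})

# Descriptive summary of FAIRness by repository (mean, SD, n).
print(df.groupby("repository")["fair_score"].agg(["mean", "std", "count"]))

# Variance in FAIRness explained by repository alone (cf. the reported R2).
model = smf.ols("fair_score ~ C(repository)", data=df).fit()
print(f"R-squared for repository alone: {model.rsquared:.3f}")
```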
- Presenter: Nicholas Mitsakakis, CHEO
- Topic: Evaluation of machine learning solutions in healthcare
- Abstract: Machine Learning (ML) models can be used to support complex clinical decision making and to improve processes and outcomes in healthcare. Development and implementation of an ML solution involves multiple steps, including exploration, study design, data preparation, model development, evaluation, and deployment. Here, I will discuss various aspects and steps of evaluating a developed model in a clinical care setting, including internal and external validation, clinical validation, impact on clinical outcomes, post-deployment evaluation, and other considerations.
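As a concrete anchor for the internal-validation step mentioned above, here is a minimal sketch of cross-validated discrimination (AUROC) for a toy classifier on synthetic data. The dataset, model, and metric are assumptions chosen for the sketch, not the speaker's pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic, class-imbalanced data standing in for a clinical prediction task.
X, y = make_classification(n_samples=500, n_features=10,
                           weights=[0.8, 0.2], random_state=0)

model = LogisticRegression(max_iter=1000)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Internal validation: cross-validated AUROC on the development data.
aucs = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"Cross-validated AUROC: {aucs.mean():.3f} +/- {aucs.std():.3f}")

# External validation would repeat this on data from a different site or
# period; clinical validation and impact evaluation would then assess
# calibration, decision thresholds, and effects on patient outcomes.
```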